00:00:00.001 Started by upstream project "autotest-per-patch" build number 130920
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.024 using credential 00000000-0000-0000-0000-000000000002
00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.069 Using shallow fetch with depth 1
00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.070 > git --version # timeout=10
00:00:00.103 > git --version # 'git version 2.39.2'
00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.162 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.163 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.317 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.329 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.340 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:02.340 > git config core.sparsecheckout # timeout=10
00:00:02.351 > git read-tree -mu HEAD # timeout=10
00:00:02.364 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:02.385 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:02.386 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:02.496 [Pipeline] Start of Pipeline
00:00:02.511 [Pipeline] library
00:00:02.513 Loading library shm_lib@master
00:00:02.513 Library shm_lib@master is cached. Copying from home.
00:00:02.534 [Pipeline] node
00:00:02.540 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.543 [Pipeline] {
00:00:02.561 [Pipeline] catchError
00:00:02.562 [Pipeline] {
00:00:02.572 [Pipeline] wrap
00:00:02.579 [Pipeline] {
00:00:02.585 [Pipeline] stage
00:00:02.587 [Pipeline] { (Prologue)
00:00:02.603 [Pipeline] echo
00:00:02.604 Node: VM-host-SM0
00:00:02.608 [Pipeline] cleanWs
00:00:02.617 [WS-CLEANUP] Deleting project workspace...
00:00:02.617 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.624 [WS-CLEANUP] done
00:00:02.827 [Pipeline] setCustomBuildProperty
00:00:02.919 [Pipeline] httpRequest
00:00:03.695 [Pipeline] echo
00:00:03.696 Sorcerer 10.211.164.101 is alive
00:00:03.705 [Pipeline] retry
00:00:03.707 [Pipeline] {
00:00:03.719 [Pipeline] httpRequest
00:00:03.724 HttpMethod: GET
00:00:03.725 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:03.725 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:03.733 Response Code: HTTP/1.1 200 OK
00:00:03.733 Success: Status code 200 is in the accepted range: 200,404
00:00:03.734 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:11.355 [Pipeline] }
00:00:11.366 [Pipeline] // retry
00:00:11.371 [Pipeline] sh
00:00:11.654 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:11.668 [Pipeline] httpRequest
00:00:12.139 [Pipeline] echo
00:00:12.141 Sorcerer 10.211.164.101 is alive
00:00:12.151 [Pipeline] retry
00:00:12.153 [Pipeline] {
00:00:12.167 [Pipeline] httpRequest
00:00:12.172 HttpMethod: GET
00:00:12.173 URL: http://10.211.164.101/packages/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:00:12.173 Sending request to url: http://10.211.164.101/packages/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:00:12.184 Response Code: HTTP/1.1 200 OK
00:00:12.184 Success: Status code 200 is in the accepted range: 200,404
00:00:12.185 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:01:26.872 [Pipeline] }
00:01:26.888 [Pipeline] // retry
00:01:26.895 [Pipeline] sh
00:01:27.173 + tar --no-same-owner -xf spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:01:30.466 [Pipeline] sh
00:01:30.755 + git -C spdk log --oneline -n5
00:01:30.755 ba5b39cb2 thread: Extended options for spdk_interrupt_register
00:01:30.755 52e9db722 util: allow a fd_group to manage all its fds
00:01:30.755 6082eddb0 util: fix total fds to wait for
00:01:30.755 8ce2f3c7d util: handle events for vfio fd type
00:01:30.755 381b6895f util: Extended options for spdk_fd_group_add
00:01:30.772 [Pipeline] writeFile
00:01:30.790 [Pipeline] sh
00:01:31.069 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:31.080 [Pipeline] sh
00:01:31.358 + cat autorun-spdk.conf
00:01:31.358 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.358 SPDK_RUN_ASAN=1
00:01:31.358 SPDK_RUN_UBSAN=1
00:01:31.358 SPDK_TEST_RAID=1
00:01:31.358 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.364 RUN_NIGHTLY=0
00:01:31.366 [Pipeline] }
00:01:31.380 [Pipeline] // stage
00:01:31.395 [Pipeline] stage
00:01:31.397 [Pipeline] { (Run VM)
00:01:31.410 [Pipeline] sh
00:01:31.689 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:31.689 + echo 'Start stage prepare_nvme.sh'
00:01:31.689 Start stage prepare_nvme.sh
00:01:31.689 + [[ -n 2 ]]
00:01:31.689 + disk_prefix=ex2
00:01:31.689 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:31.689 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:31.689 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:31.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.689 ++ SPDK_RUN_ASAN=1
00:01:31.689 ++ SPDK_RUN_UBSAN=1
00:01:31.689 ++ SPDK_TEST_RAID=1
00:01:31.689 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.689 ++ RUN_NIGHTLY=0
00:01:31.689 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:31.689 + nvme_files=()
00:01:31.689 + declare -A nvme_files
00:01:31.689 + backend_dir=/var/lib/libvirt/images/backends
00:01:31.689 + nvme_files['nvme.img']=5G
00:01:31.689 + nvme_files['nvme-cmb.img']=5G
00:01:31.689 + nvme_files['nvme-multi0.img']=4G
00:01:31.689 + nvme_files['nvme-multi1.img']=4G
00:01:31.689 + nvme_files['nvme-multi2.img']=4G
00:01:31.689 + nvme_files['nvme-openstack.img']=8G
00:01:31.689 + nvme_files['nvme-zns.img']=5G
00:01:31.689 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:31.689 + (( SPDK_TEST_FTL == 1 ))
00:01:31.689 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:31.689 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:31.689 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:31.689 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:31.689 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:31.689 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:31.689 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.689 + for nvme in "${!nvme_files[@]}"
00:01:31.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:31.947 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.947 + for nvme in "${!nvme_files[@]}"
00:01:31.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:32.205 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.205 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:32.205 + echo 'End stage prepare_nvme.sh'
00:01:32.205 End stage prepare_nvme.sh
00:01:32.215 [Pipeline] sh
00:01:32.494 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:32.494 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:32.494
00:01:32.494 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:32.494 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:32.494 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:32.494 HELP=0
00:01:32.494 DRY_RUN=0
00:01:32.494 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:32.494 NVME_DISKS_TYPE=nvme,nvme,
00:01:32.494 NVME_AUTO_CREATE=0
00:01:32.494 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:32.494 NVME_CMB=,,
00:01:32.494 NVME_PMR=,,
00:01:32.494 NVME_ZNS=,,
00:01:32.494 NVME_MS=,,
00:01:32.494 NVME_FDP=,,
00:01:32.494 SPDK_VAGRANT_DISTRO=fedora39
00:01:32.494 SPDK_VAGRANT_VMCPU=10
00:01:32.494 SPDK_VAGRANT_VMRAM=12288
00:01:32.494 SPDK_VAGRANT_PROVIDER=libvirt
00:01:32.494 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:32.494 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:32.494 SPDK_OPENSTACK_NETWORK=0
00:01:32.494 VAGRANT_PACKAGE_BOX=0
00:01:32.494 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:32.494 FORCE_DISTRO=true
00:01:32.494 VAGRANT_BOX_VERSION=
00:01:32.494 EXTRA_VAGRANTFILES=
00:01:32.494 NIC_MODEL=e1000
00:01:32.494
00:01:32.494 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:32.494 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:35.777 Bringing machine 'default' up with 'libvirt' provider...
00:01:36.782 ==> default: Creating image (snapshot of base box volume).
00:01:36.782 ==> default: Creating domain with the following settings...
00:01:36.782 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728403769_c446bd10d04fce8827a0
00:01:36.782 ==> default: -- Domain type: kvm
00:01:36.782 ==> default: -- Cpus: 10
00:01:36.782 ==> default: -- Feature: acpi
00:01:36.782 ==> default: -- Feature: apic
00:01:36.782 ==> default: -- Feature: pae
00:01:36.782 ==> default: -- Memory: 12288M
00:01:36.782 ==> default: -- Memory Backing: hugepages:
00:01:36.782 ==> default: -- Management MAC:
00:01:36.782 ==> default: -- Loader:
00:01:36.782 ==> default: -- Nvram:
00:01:36.782 ==> default: -- Base box: spdk/fedora39
00:01:36.782 ==> default: -- Storage pool: default
00:01:36.782 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728403769_c446bd10d04fce8827a0.img (20G)
00:01:36.782 ==> default: -- Volume Cache: default
00:01:36.782 ==> default: -- Kernel:
00:01:36.782 ==> default: -- Initrd:
00:01:36.782 ==> default: -- Graphics Type: vnc
00:01:36.782 ==> default: -- Graphics Port: -1
00:01:36.782 ==> default: -- Graphics IP: 127.0.0.1
00:01:36.782 ==> default: -- Graphics Password: Not defined
00:01:36.782 ==> default: -- Video Type: cirrus
00:01:36.782 ==> default: -- Video VRAM: 9216
00:01:36.782 ==> default: -- Sound Type:
00:01:36.782 ==> default: -- Keymap: en-us
00:01:36.782 ==> default: -- TPM Path:
00:01:36.782 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:36.782 ==> default: -- Command line args:
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:36.782 ==> default: -> value=-drive,
00:01:36.782 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:36.782 ==> default: -> value=-drive,
00:01:36.782 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:36.782 ==> default: -> value=-drive,
00:01:36.782 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:36.782 ==> default: -> value=-drive,
00:01:36.782 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:36.782 ==> default: -> value=-device,
00:01:36.782 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:37.042 ==> default: Creating shared folders metadata...
00:01:37.042 ==> default: Starting domain.
00:01:38.944 ==> default: Waiting for domain to get an IP address...
00:01:57.051 ==> default: Waiting for SSH to become available...
00:01:58.433 ==> default: Configuring and enabling network interfaces...
00:02:02.614 default: SSH address: 192.168.121.242:22
00:02:02.614 default: SSH username: vagrant
00:02:02.614 default: SSH auth method: private key
00:02:05.151 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:13.285 ==> default: Mounting SSHFS shared folder...
00:02:14.219 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:14.219 ==> default: Checking Mount..
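The "Command line args" block above shows libvirt passing QEMU one NVMe controller per `-b` backend, with the second controller (serial 12341) exposing three backing files as namespaces 1-3. As a hedged sketch (variable names and the loop are illustrative, not taken from the actual Vagrantfile), that second controller's argument list could be assembled like this:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the nvme-1 controller args seen in the log:
# one controller device, then a -drive/-device pair per namespace.
backend_dir=/var/lib/libvirt/images/backends
args=(-device "nvme,id=nvme-1,serial=12341,addr=0x11")
for i in 0 1 2; do
  img="${backend_dir}/ex2-nvme-multi${i}.img"
  # Backing file as a raw, unattached drive...
  args+=(-drive "format=raw,file=${img},if=none,id=nvme-1-drive${i}")
  # ...attached to the controller as namespace i+1 with 4K blocks.
  args+=(-device "nvme-ns,drive=nvme-1-drive${i},bus=nvme-1,nsid=$((i + 1)),zoned=false,logical_block_size=4096,physical_block_size=4096")
done
printf '%s\n' "${args[@]}"
```

This matches the guest-side view later in the log, where `setup.sh status` reports one controller with `nvme0n1 nvme0n2 nvme0n3`.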
00:02:15.622 ==> default: Folder Successfully Mounted!
00:02:15.622 ==> default: Running provisioner: file...
00:02:16.190 default: ~/.gitconfig => .gitconfig
00:02:16.758
00:02:16.758 SUCCESS!
00:02:16.758
00:02:16.758 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:16.758 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:16.758 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:16.758
00:02:16.767 [Pipeline] }
00:02:16.782 [Pipeline] // stage
00:02:16.792 [Pipeline] dir
00:02:16.793 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:16.795 [Pipeline] {
00:02:16.808 [Pipeline] catchError
00:02:16.809 [Pipeline] {
00:02:16.822 [Pipeline] sh
00:02:17.100 + vagrant ssh-config --host vagrant
00:02:17.100 + sed -ne /^Host/,$p
00:02:17.100 + tee ssh_conf
00:02:21.289 Host vagrant
00:02:21.289 HostName 192.168.121.242
00:02:21.289 User vagrant
00:02:21.289 Port 22
00:02:21.289 UserKnownHostsFile /dev/null
00:02:21.289 StrictHostKeyChecking no
00:02:21.289 PasswordAuthentication no
00:02:21.289 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:21.289 IdentitiesOnly yes
00:02:21.289 LogLevel FATAL
00:02:21.289 ForwardAgent yes
00:02:21.289 ForwardX11 yes
00:02:21.289
00:02:21.303 [Pipeline] withEnv
00:02:21.305 [Pipeline] {
00:02:21.316 [Pipeline] sh
00:02:21.594 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:21.594 source /etc/os-release
00:02:21.594 [[ -e /image.version ]] && img=$(< /image.version)
00:02:21.594 # Minimal, systemd-like check.
00:02:21.594 if [[ -e /.dockerenv ]]; then
00:02:21.594 # Clear garbage from the node's name:
00:02:21.594 # agt-er_autotest_547-896 -> autotest_547-896
00:02:21.594 # $HOSTNAME is the actual container id
00:02:21.594 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:21.594 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:21.594 # We can assume this is a mount from a host where container is running,
00:02:21.594 # so fetch its hostname to easily identify the target swarm worker.
00:02:21.594 container="$(< /etc/hostname) ($agent)"
00:02:21.594 else
00:02:21.594 # Fallback
00:02:21.594 container=$agent
00:02:21.594 fi
00:02:21.594 fi
00:02:21.594 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:21.594
00:02:21.863 [Pipeline] }
00:02:21.880 [Pipeline] // withEnv
00:02:21.888 [Pipeline] setCustomBuildProperty
00:02:21.902 [Pipeline] stage
00:02:21.904 [Pipeline] { (Tests)
00:02:21.922 [Pipeline] sh
00:02:22.203 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:22.473 [Pipeline] sh
00:02:22.753 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:23.026 [Pipeline] timeout
00:02:23.026 Timeout set to expire in 1 hr 30 min
00:02:23.028 [Pipeline] {
00:02:23.042 [Pipeline] sh
00:02:23.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:23.913 HEAD is now at ba5b39cb2 thread: Extended options for spdk_interrupt_register
00:02:23.925 [Pipeline] sh
00:02:24.206 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:24.476 [Pipeline] sh
00:02:24.756 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:25.029 [Pipeline] sh
00:02:25.307 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:25.565 ++ readlink -f spdk_repo
00:02:25.565 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:25.565 + [[ -n /home/vagrant/spdk_repo ]]
00:02:25.565 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:25.565 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:25.565 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:25.565 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:25.565 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:25.565 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:25.565 + cd /home/vagrant/spdk_repo
00:02:25.565 + source /etc/os-release
00:02:25.565 ++ NAME='Fedora Linux'
00:02:25.565 ++ VERSION='39 (Cloud Edition)'
00:02:25.565 ++ ID=fedora
00:02:25.565 ++ VERSION_ID=39
00:02:25.565 ++ VERSION_CODENAME=
00:02:25.565 ++ PLATFORM_ID=platform:f39
00:02:25.565 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:25.565 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:25.565 ++ LOGO=fedora-logo-icon
00:02:25.565 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:25.565 ++ HOME_URL=https://fedoraproject.org/
00:02:25.565 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:25.565 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:25.565 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:25.565 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:25.565 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:25.565 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:25.565 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:25.565 ++ SUPPORT_END=2024-11-12
00:02:25.565 ++ VARIANT='Cloud Edition'
00:02:25.565 ++ VARIANT_ID=cloud
00:02:25.565 + uname -a
00:02:25.565 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:25.565 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:25.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:25.822 Hugepages
00:02:25.822 node hugesize free / total
00:02:25.822 node0 1048576kB 0 / 0
00:02:25.822 node0 2048kB 0 / 0
00:02:25.822
00:02:25.822 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:26.095 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:26.095 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:26.095 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:26.095 + rm -f /tmp/spdk-ld-path
00:02:26.095 + source autorun-spdk.conf
00:02:26.095 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.095 ++ SPDK_RUN_ASAN=1
00:02:26.095 ++ SPDK_RUN_UBSAN=1
00:02:26.095 ++ SPDK_TEST_RAID=1
00:02:26.095 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:26.095 ++ RUN_NIGHTLY=0
00:02:26.095 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:26.095 + [[ -n '' ]]
00:02:26.095 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:26.095 + for M in /var/spdk/build-*-manifest.txt
00:02:26.095 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:26.095 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:26.095 + for M in /var/spdk/build-*-manifest.txt
00:02:26.095 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:26.095 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:26.095 + for M in /var/spdk/build-*-manifest.txt
00:02:26.095 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:26.095 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:26.095 ++ uname
00:02:26.095 + [[ Linux == \L\i\n\u\x ]]
00:02:26.095 + sudo dmesg -T
00:02:26.095 + sudo dmesg --clear
00:02:26.095 + dmesg_pid=5264
00:02:26.095 + sudo dmesg -Tw
00:02:26.095 + [[ Fedora Linux == FreeBSD ]]
00:02:26.095 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.095 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.095 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:26.095 + [[ -x /usr/src/fio-static/fio ]]
00:02:26.095 + export FIO_BIN=/usr/src/fio-static/fio
00:02:26.095 + FIO_BIN=/usr/src/fio-static/fio
00:02:26.095 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:26.095 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:26.095 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:26.095 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.095 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.095 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:26.095 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.095 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.095 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:26.095 Test configuration:
00:02:26.095 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.095 SPDK_RUN_ASAN=1
00:02:26.095 SPDK_RUN_UBSAN=1
00:02:26.095 SPDK_TEST_RAID=1
00:02:26.095 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:26.378 RUN_NIGHTLY=0
00:02:26.378 16:10:19 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:26.378 16:10:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:26.378 16:10:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:26.378 16:10:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:26.378 16:10:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:26.378 16:10:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:26.378 16:10:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.378 16:10:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.378 16:10:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.378 16:10:19 -- paths/export.sh@5 -- $ export PATH
00:02:26.378 16:10:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.378 16:10:19 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:26.378 16:10:19 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:26.378 16:10:19 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403819.XXXXXX
00:02:26.378 16:10:19 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728403819.C6gh1K
00:02:26.378 16:10:19 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:26.378 16:10:19 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:26.378 16:10:19 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:26.378 16:10:19 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:26.378 16:10:19 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:26.378 16:10:19 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:26.378 16:10:19 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:26.378 16:10:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:26.378 16:10:19 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:26.378 16:10:19 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:26.378 16:10:19 -- pm/common@17 -- $ local monitor
00:02:26.378 16:10:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.378 16:10:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.378 16:10:19 -- pm/common@25 -- $ sleep 1
00:02:26.378 16:10:19 -- pm/common@21 -- $ date +%s
00:02:26.378 16:10:19 -- pm/common@21 -- $ date +%s
00:02:26.378 16:10:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728403819
00:02:26.378 16:10:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728403819
00:02:26.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728403819_collect-vmstat.pm.log
00:02:26.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728403819_collect-cpu-load.pm.log
00:02:27.313 16:10:20 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:27.313 16:10:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:27.313 16:10:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:27.313 16:10:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:27.313 16:10:20 -- spdk/autobuild.sh@16 -- $ date -u
00:02:27.313 Tue Oct 8 04:10:20 PM UTC 2024
00:02:27.313 16:10:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:27.313 v25.01-pre-51-gba5b39cb2
00:02:27.313 16:10:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:27.313 16:10:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:27.313 16:10:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:27.313 16:10:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:27.313 16:10:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.313 ************************************
00:02:27.313 START TEST asan
00:02:27.313 ************************************
00:02:27.313 using asan
00:02:27.313 16:10:20 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:27.313
00:02:27.313 real 0m0.000s
00:02:27.313 user 0m0.000s
00:02:27.313 sys 0m0.000s
00:02:27.313 16:10:20 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:27.313 16:10:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:27.313 ************************************
00:02:27.313 END TEST asan
00:02:27.313 ************************************
00:02:27.313 16:10:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:27.313 16:10:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:27.313 16:10:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:27.313 16:10:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:27.313 16:10:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.313 ************************************
00:02:27.313 START TEST ubsan
00:02:27.313 ************************************
00:02:27.313 using ubsan
00:02:27.313 16:10:20 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:27.313
00:02:27.313 real 0m0.000s
00:02:27.313 user 0m0.000s
00:02:27.313 sys 0m0.000s
00:02:27.313 16:10:20 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:27.313 16:10:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:27.313 ************************************
00:02:27.313 END TEST ubsan
00:02:27.313 ************************************
00:02:27.313 16:10:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:27.313 16:10:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:27.313 16:10:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:27.313 16:10:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:27.571 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:27.571 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:27.829 Using 'verbs' RDMA provider
00:02:40.955 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:55.896 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:55.896 Creating mk/config.mk...done.
00:02:55.896 Creating mk/cc.flags.mk...done.
00:02:55.896 Type 'make' to build.
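The `START TEST` / `END TEST` banners and `real`/`user`/`sys` summaries above come from SPDK's `run_test` helper in autotest_common.sh. A minimal, hypothetical sketch of a wrapper with the same shape (the real helper also manages xtrace and wider banners) looks like:

```shell
#!/usr/bin/env bash
# Simplified run_test-style wrapper (illustrative only): print banners,
# time the wrapped command, and propagate its exit status.
run_test() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"           # timing summary goes to stderr
  local rc=$?
  echo "END TEST $name"
  return $rc
}

run_test asan echo 'using asan'
```

Wrapping each phase this way is what lets a single console log double as a per-test timing report.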
00:02:55.896 16:10:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:55.896 16:10:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:55.896 16:10:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:55.896 16:10:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.896 ************************************ 00:02:55.896 START TEST make 00:02:55.896 ************************************ 00:02:55.896 16:10:47 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:55.896 make[1]: Nothing to be done for 'all'. 00:03:10.819 The Meson build system 00:03:10.819 Version: 1.5.0 00:03:10.819 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:10.819 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:10.819 Build type: native build 00:03:10.819 Program cat found: YES (/usr/bin/cat) 00:03:10.819 Project name: DPDK 00:03:10.819 Project version: 24.03.0 00:03:10.819 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:10.819 C linker for the host machine: cc ld.bfd 2.40-14 00:03:10.819 Host machine cpu family: x86_64 00:03:10.819 Host machine cpu: x86_64 00:03:10.819 Message: ## Building in Developer Mode ## 00:03:10.819 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:10.819 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:10.819 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:10.819 Program python3 found: YES (/usr/bin/python3) 00:03:10.819 Program cat found: YES (/usr/bin/cat) 00:03:10.819 Compiler for C supports arguments -march=native: YES 00:03:10.819 Checking for size of "void *" : 8 00:03:10.819 Checking for size of "void *" : 8 (cached) 00:03:10.819 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:10.819 Library m found: YES 00:03:10.819 Library numa found: YES 00:03:10.819 Has header "numaif.h" : YES 
00:03:10.819 Library fdt found: NO 00:03:10.819 Library execinfo found: NO 00:03:10.819 Has header "execinfo.h" : YES 00:03:10.819 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:10.819 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:10.819 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:10.819 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:10.819 Run-time dependency openssl found: YES 3.1.1 00:03:10.819 Run-time dependency libpcap found: YES 1.10.4 00:03:10.819 Has header "pcap.h" with dependency libpcap: YES 00:03:10.819 Compiler for C supports arguments -Wcast-qual: YES 00:03:10.819 Compiler for C supports arguments -Wdeprecated: YES 00:03:10.819 Compiler for C supports arguments -Wformat: YES 00:03:10.819 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:10.819 Compiler for C supports arguments -Wformat-security: NO 00:03:10.819 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.819 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:10.819 Compiler for C supports arguments -Wnested-externs: YES 00:03:10.819 Compiler for C supports arguments -Wold-style-definition: YES 00:03:10.819 Compiler for C supports arguments -Wpointer-arith: YES 00:03:10.819 Compiler for C supports arguments -Wsign-compare: YES 00:03:10.819 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:10.819 Compiler for C supports arguments -Wundef: YES 00:03:10.819 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.819 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:10.819 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:10.819 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.819 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:10.819 Program objdump found: YES (/usr/bin/objdump) 00:03:10.819 Compiler for C supports arguments -mavx512f: YES 00:03:10.819 Checking if "AVX512 
checking" compiles: YES 00:03:10.819 Fetching value of define "__SSE4_2__" : 1 00:03:10.819 Fetching value of define "__AES__" : 1 00:03:10.819 Fetching value of define "__AVX__" : 1 00:03:10.819 Fetching value of define "__AVX2__" : 1 00:03:10.819 Fetching value of define "__AVX512BW__" : (undefined) 00:03:10.819 Fetching value of define "__AVX512CD__" : (undefined) 00:03:10.819 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:10.819 Fetching value of define "__AVX512F__" : (undefined) 00:03:10.819 Fetching value of define "__AVX512VL__" : (undefined) 00:03:10.819 Fetching value of define "__PCLMUL__" : 1 00:03:10.819 Fetching value of define "__RDRND__" : 1 00:03:10.819 Fetching value of define "__RDSEED__" : 1 00:03:10.819 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:10.819 Fetching value of define "__znver1__" : (undefined) 00:03:10.819 Fetching value of define "__znver2__" : (undefined) 00:03:10.819 Fetching value of define "__znver3__" : (undefined) 00:03:10.819 Fetching value of define "__znver4__" : (undefined) 00:03:10.819 Library asan found: YES 00:03:10.819 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:10.819 Message: lib/log: Defining dependency "log" 00:03:10.819 Message: lib/kvargs: Defining dependency "kvargs" 00:03:10.819 Message: lib/telemetry: Defining dependency "telemetry" 00:03:10.819 Library rt found: YES 00:03:10.819 Checking for function "getentropy" : NO 00:03:10.819 Message: lib/eal: Defining dependency "eal" 00:03:10.819 Message: lib/ring: Defining dependency "ring" 00:03:10.819 Message: lib/rcu: Defining dependency "rcu" 00:03:10.819 Message: lib/mempool: Defining dependency "mempool" 00:03:10.819 Message: lib/mbuf: Defining dependency "mbuf" 00:03:10.819 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:10.819 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:10.819 Compiler for C supports arguments -mpclmul: YES 00:03:10.819 Compiler for C supports arguments 
-maes: YES 00:03:10.819 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:10.819 Compiler for C supports arguments -mavx512bw: YES 00:03:10.819 Compiler for C supports arguments -mavx512dq: YES 00:03:10.819 Compiler for C supports arguments -mavx512vl: YES 00:03:10.819 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:10.819 Compiler for C supports arguments -mavx2: YES 00:03:10.819 Compiler for C supports arguments -mavx: YES 00:03:10.819 Message: lib/net: Defining dependency "net" 00:03:10.819 Message: lib/meter: Defining dependency "meter" 00:03:10.819 Message: lib/ethdev: Defining dependency "ethdev" 00:03:10.819 Message: lib/pci: Defining dependency "pci" 00:03:10.819 Message: lib/cmdline: Defining dependency "cmdline" 00:03:10.819 Message: lib/hash: Defining dependency "hash" 00:03:10.819 Message: lib/timer: Defining dependency "timer" 00:03:10.819 Message: lib/compressdev: Defining dependency "compressdev" 00:03:10.819 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:10.819 Message: lib/dmadev: Defining dependency "dmadev" 00:03:10.819 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:10.819 Message: lib/power: Defining dependency "power" 00:03:10.819 Message: lib/reorder: Defining dependency "reorder" 00:03:10.819 Message: lib/security: Defining dependency "security" 00:03:10.819 Has header "linux/userfaultfd.h" : YES 00:03:10.819 Has header "linux/vduse.h" : YES 00:03:10.819 Message: lib/vhost: Defining dependency "vhost" 00:03:10.819 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:10.819 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:10.819 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:10.819 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:10.819 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:10.819 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:10.819 Message: 
Disabling ml/* drivers: missing internal dependency "mldev" 00:03:10.819 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:10.819 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:10.819 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:10.819 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:10.819 Configuring doxy-api-html.conf using configuration 00:03:10.819 Configuring doxy-api-man.conf using configuration 00:03:10.819 Program mandb found: YES (/usr/bin/mandb) 00:03:10.819 Program sphinx-build found: NO 00:03:10.819 Configuring rte_build_config.h using configuration 00:03:10.819 Message: 00:03:10.819 ================= 00:03:10.819 Applications Enabled 00:03:10.819 ================= 00:03:10.819 00:03:10.819 apps: 00:03:10.819 00:03:10.819 00:03:10.819 Message: 00:03:10.819 ================= 00:03:10.819 Libraries Enabled 00:03:10.819 ================= 00:03:10.819 00:03:10.819 libs: 00:03:10.819 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:10.819 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:10.819 cryptodev, dmadev, power, reorder, security, vhost, 00:03:10.819 00:03:10.819 Message: 00:03:10.819 =============== 00:03:10.819 Drivers Enabled 00:03:10.819 =============== 00:03:10.819 00:03:10.819 common: 00:03:10.819 00:03:10.819 bus: 00:03:10.819 pci, vdev, 00:03:10.819 mempool: 00:03:10.819 ring, 00:03:10.819 dma: 00:03:10.819 00:03:10.819 net: 00:03:10.819 00:03:10.819 crypto: 00:03:10.819 00:03:10.819 compress: 00:03:10.819 00:03:10.819 vdpa: 00:03:10.819 00:03:10.819 00:03:10.819 Message: 00:03:10.819 ================= 00:03:10.819 Content Skipped 00:03:10.819 ================= 00:03:10.819 00:03:10.819 apps: 00:03:10.819 dumpcap: explicitly disabled via build config 00:03:10.820 graph: explicitly disabled via build config 00:03:10.820 pdump: explicitly disabled via build config 00:03:10.820 proc-info: explicitly disabled via 
build config 00:03:10.820 test-acl: explicitly disabled via build config 00:03:10.820 test-bbdev: explicitly disabled via build config 00:03:10.820 test-cmdline: explicitly disabled via build config 00:03:10.820 test-compress-perf: explicitly disabled via build config 00:03:10.820 test-crypto-perf: explicitly disabled via build config 00:03:10.820 test-dma-perf: explicitly disabled via build config 00:03:10.820 test-eventdev: explicitly disabled via build config 00:03:10.820 test-fib: explicitly disabled via build config 00:03:10.820 test-flow-perf: explicitly disabled via build config 00:03:10.820 test-gpudev: explicitly disabled via build config 00:03:10.820 test-mldev: explicitly disabled via build config 00:03:10.820 test-pipeline: explicitly disabled via build config 00:03:10.820 test-pmd: explicitly disabled via build config 00:03:10.820 test-regex: explicitly disabled via build config 00:03:10.820 test-sad: explicitly disabled via build config 00:03:10.820 test-security-perf: explicitly disabled via build config 00:03:10.820 00:03:10.820 libs: 00:03:10.820 argparse: explicitly disabled via build config 00:03:10.820 metrics: explicitly disabled via build config 00:03:10.820 acl: explicitly disabled via build config 00:03:10.820 bbdev: explicitly disabled via build config 00:03:10.820 bitratestats: explicitly disabled via build config 00:03:10.820 bpf: explicitly disabled via build config 00:03:10.820 cfgfile: explicitly disabled via build config 00:03:10.820 distributor: explicitly disabled via build config 00:03:10.820 efd: explicitly disabled via build config 00:03:10.820 eventdev: explicitly disabled via build config 00:03:10.820 dispatcher: explicitly disabled via build config 00:03:10.820 gpudev: explicitly disabled via build config 00:03:10.820 gro: explicitly disabled via build config 00:03:10.820 gso: explicitly disabled via build config 00:03:10.820 ip_frag: explicitly disabled via build config 00:03:10.820 jobstats: explicitly disabled via build 
config 00:03:10.820 latencystats: explicitly disabled via build config 00:03:10.820 lpm: explicitly disabled via build config 00:03:10.820 member: explicitly disabled via build config 00:03:10.820 pcapng: explicitly disabled via build config 00:03:10.820 rawdev: explicitly disabled via build config 00:03:10.820 regexdev: explicitly disabled via build config 00:03:10.820 mldev: explicitly disabled via build config 00:03:10.820 rib: explicitly disabled via build config 00:03:10.820 sched: explicitly disabled via build config 00:03:10.820 stack: explicitly disabled via build config 00:03:10.820 ipsec: explicitly disabled via build config 00:03:10.820 pdcp: explicitly disabled via build config 00:03:10.820 fib: explicitly disabled via build config 00:03:10.820 port: explicitly disabled via build config 00:03:10.820 pdump: explicitly disabled via build config 00:03:10.820 table: explicitly disabled via build config 00:03:10.820 pipeline: explicitly disabled via build config 00:03:10.820 graph: explicitly disabled via build config 00:03:10.820 node: explicitly disabled via build config 00:03:10.820 00:03:10.820 drivers: 00:03:10.820 common/cpt: not in enabled drivers build config 00:03:10.820 common/dpaax: not in enabled drivers build config 00:03:10.820 common/iavf: not in enabled drivers build config 00:03:10.820 common/idpf: not in enabled drivers build config 00:03:10.820 common/ionic: not in enabled drivers build config 00:03:10.820 common/mvep: not in enabled drivers build config 00:03:10.820 common/octeontx: not in enabled drivers build config 00:03:10.820 bus/auxiliary: not in enabled drivers build config 00:03:10.820 bus/cdx: not in enabled drivers build config 00:03:10.820 bus/dpaa: not in enabled drivers build config 00:03:10.820 bus/fslmc: not in enabled drivers build config 00:03:10.820 bus/ifpga: not in enabled drivers build config 00:03:10.820 bus/platform: not in enabled drivers build config 00:03:10.820 bus/uacce: not in enabled drivers build config 
00:03:10.820 bus/vmbus: not in enabled drivers build config 00:03:10.820 common/cnxk: not in enabled drivers build config 00:03:10.820 common/mlx5: not in enabled drivers build config 00:03:10.820 common/nfp: not in enabled drivers build config 00:03:10.820 common/nitrox: not in enabled drivers build config 00:03:10.820 common/qat: not in enabled drivers build config 00:03:10.820 common/sfc_efx: not in enabled drivers build config 00:03:10.820 mempool/bucket: not in enabled drivers build config 00:03:10.820 mempool/cnxk: not in enabled drivers build config 00:03:10.820 mempool/dpaa: not in enabled drivers build config 00:03:10.820 mempool/dpaa2: not in enabled drivers build config 00:03:10.820 mempool/octeontx: not in enabled drivers build config 00:03:10.820 mempool/stack: not in enabled drivers build config 00:03:10.820 dma/cnxk: not in enabled drivers build config 00:03:10.820 dma/dpaa: not in enabled drivers build config 00:03:10.820 dma/dpaa2: not in enabled drivers build config 00:03:10.820 dma/hisilicon: not in enabled drivers build config 00:03:10.820 dma/idxd: not in enabled drivers build config 00:03:10.820 dma/ioat: not in enabled drivers build config 00:03:10.820 dma/skeleton: not in enabled drivers build config 00:03:10.820 net/af_packet: not in enabled drivers build config 00:03:10.820 net/af_xdp: not in enabled drivers build config 00:03:10.820 net/ark: not in enabled drivers build config 00:03:10.820 net/atlantic: not in enabled drivers build config 00:03:10.820 net/avp: not in enabled drivers build config 00:03:10.820 net/axgbe: not in enabled drivers build config 00:03:10.820 net/bnx2x: not in enabled drivers build config 00:03:10.820 net/bnxt: not in enabled drivers build config 00:03:10.820 net/bonding: not in enabled drivers build config 00:03:10.820 net/cnxk: not in enabled drivers build config 00:03:10.820 net/cpfl: not in enabled drivers build config 00:03:10.820 net/cxgbe: not in enabled drivers build config 00:03:10.820 net/dpaa: not in 
enabled drivers build config 00:03:10.820 net/dpaa2: not in enabled drivers build config 00:03:10.820 net/e1000: not in enabled drivers build config 00:03:10.820 net/ena: not in enabled drivers build config 00:03:10.820 net/enetc: not in enabled drivers build config 00:03:10.820 net/enetfec: not in enabled drivers build config 00:03:10.820 net/enic: not in enabled drivers build config 00:03:10.820 net/failsafe: not in enabled drivers build config 00:03:10.820 net/fm10k: not in enabled drivers build config 00:03:10.820 net/gve: not in enabled drivers build config 00:03:10.820 net/hinic: not in enabled drivers build config 00:03:10.820 net/hns3: not in enabled drivers build config 00:03:10.820 net/i40e: not in enabled drivers build config 00:03:10.820 net/iavf: not in enabled drivers build config 00:03:10.820 net/ice: not in enabled drivers build config 00:03:10.820 net/idpf: not in enabled drivers build config 00:03:10.820 net/igc: not in enabled drivers build config 00:03:10.820 net/ionic: not in enabled drivers build config 00:03:10.820 net/ipn3ke: not in enabled drivers build config 00:03:10.820 net/ixgbe: not in enabled drivers build config 00:03:10.820 net/mana: not in enabled drivers build config 00:03:10.820 net/memif: not in enabled drivers build config 00:03:10.820 net/mlx4: not in enabled drivers build config 00:03:10.820 net/mlx5: not in enabled drivers build config 00:03:10.820 net/mvneta: not in enabled drivers build config 00:03:10.820 net/mvpp2: not in enabled drivers build config 00:03:10.820 net/netvsc: not in enabled drivers build config 00:03:10.820 net/nfb: not in enabled drivers build config 00:03:10.820 net/nfp: not in enabled drivers build config 00:03:10.820 net/ngbe: not in enabled drivers build config 00:03:10.820 net/null: not in enabled drivers build config 00:03:10.820 net/octeontx: not in enabled drivers build config 00:03:10.820 net/octeon_ep: not in enabled drivers build config 00:03:10.820 net/pcap: not in enabled drivers build 
config 00:03:10.820 net/pfe: not in enabled drivers build config 00:03:10.820 net/qede: not in enabled drivers build config 00:03:10.820 net/ring: not in enabled drivers build config 00:03:10.820 net/sfc: not in enabled drivers build config 00:03:10.820 net/softnic: not in enabled drivers build config 00:03:10.820 net/tap: not in enabled drivers build config 00:03:10.820 net/thunderx: not in enabled drivers build config 00:03:10.820 net/txgbe: not in enabled drivers build config 00:03:10.820 net/vdev_netvsc: not in enabled drivers build config 00:03:10.820 net/vhost: not in enabled drivers build config 00:03:10.820 net/virtio: not in enabled drivers build config 00:03:10.820 net/vmxnet3: not in enabled drivers build config 00:03:10.820 raw/*: missing internal dependency, "rawdev" 00:03:10.820 crypto/armv8: not in enabled drivers build config 00:03:10.820 crypto/bcmfs: not in enabled drivers build config 00:03:10.820 crypto/caam_jr: not in enabled drivers build config 00:03:10.820 crypto/ccp: not in enabled drivers build config 00:03:10.820 crypto/cnxk: not in enabled drivers build config 00:03:10.820 crypto/dpaa_sec: not in enabled drivers build config 00:03:10.820 crypto/dpaa2_sec: not in enabled drivers build config 00:03:10.820 crypto/ipsec_mb: not in enabled drivers build config 00:03:10.820 crypto/mlx5: not in enabled drivers build config 00:03:10.820 crypto/mvsam: not in enabled drivers build config 00:03:10.820 crypto/nitrox: not in enabled drivers build config 00:03:10.820 crypto/null: not in enabled drivers build config 00:03:10.820 crypto/octeontx: not in enabled drivers build config 00:03:10.820 crypto/openssl: not in enabled drivers build config 00:03:10.820 crypto/scheduler: not in enabled drivers build config 00:03:10.820 crypto/uadk: not in enabled drivers build config 00:03:10.820 crypto/virtio: not in enabled drivers build config 00:03:10.820 compress/isal: not in enabled drivers build config 00:03:10.820 compress/mlx5: not in enabled drivers build 
config 00:03:10.820 compress/nitrox: not in enabled drivers build config 00:03:10.820 compress/octeontx: not in enabled drivers build config 00:03:10.820 compress/zlib: not in enabled drivers build config 00:03:10.820 regex/*: missing internal dependency, "regexdev" 00:03:10.820 ml/*: missing internal dependency, "mldev" 00:03:10.820 vdpa/ifc: not in enabled drivers build config 00:03:10.820 vdpa/mlx5: not in enabled drivers build config 00:03:10.820 vdpa/nfp: not in enabled drivers build config 00:03:10.820 vdpa/sfc: not in enabled drivers build config 00:03:10.820 event/*: missing internal dependency, "eventdev" 00:03:10.820 baseband/*: missing internal dependency, "bbdev" 00:03:10.820 gpu/*: missing internal dependency, "gpudev" 00:03:10.820 00:03:10.820 00:03:10.820 Build targets in project: 85 00:03:10.820 00:03:10.820 DPDK 24.03.0 00:03:10.820 00:03:10.820 User defined options 00:03:10.820 buildtype : debug 00:03:10.820 default_library : shared 00:03:10.820 libdir : lib 00:03:10.820 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:10.820 b_sanitize : address 00:03:10.821 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:10.821 c_link_args : 00:03:10.821 cpu_instruction_set: native 00:03:10.821 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:10.821 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:10.821 enable_docs : false 00:03:10.821 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:10.821 enable_kmods : false 00:03:10.821 max_lcores : 128 00:03:10.821 tests : false 
00:03:10.821 00:03:10.821 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.821 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:10.821 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:10.821 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:10.821 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:10.821 [4/268] Linking static target lib/librte_log.a 00:03:10.821 [5/268] Linking static target lib/librte_kvargs.a 00:03:10.821 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:10.821 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.821 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:11.079 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.079 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:11.079 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.079 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:11.079 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:11.079 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:11.337 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:11.337 [16/268] Linking static target lib/librte_telemetry.a 00:03:11.337 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:11.337 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.595 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:11.595 [20/268] Linking target lib/librte_log.so.24.1 00:03:11.853 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 
00:03:11.853 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:12.110 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:12.110 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:12.110 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:12.110 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.111 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:12.111 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:12.368 [29/268] Linking target lib/librte_telemetry.so.24.1 00:03:12.368 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:12.368 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:12.368 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:12.368 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:12.626 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:12.884 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:12.884 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:12.884 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:13.141 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:13.141 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:13.141 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:13.399 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:13.399 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:13.399 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:13.399 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:13.657 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:13.915 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:13.915 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:13.915 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:13.915 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:13.915 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.172 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.172 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:14.430 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:14.688 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:14.948 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:14.948 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:14.948 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:14.948 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.212 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:15.212 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.212 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.212 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:15.472 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:15.472 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:15.472 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:15.730 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:15.730 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:15.730 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:15.988 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:15.988 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.988 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:15.988 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:16.246 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:16.246 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:16.246 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:16.505 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:16.505 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:16.505 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:16.505 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:16.763 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:16.763 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:16.763 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:16.763 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:16.763 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:17.020 [85/268] Linking static target lib/librte_eal.a 00:03:17.020 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:17.278 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:17.278 [88/268] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.278 [89/268] Linking static target lib/librte_ring.a 00:03:17.535 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:17.535 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:17.793 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:17.793 [93/268] Linking static target lib/librte_mempool.a 00:03:17.793 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:17.793 [95/268] Linking static target lib/librte_rcu.a 00:03:17.793 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:17.793 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:18.051 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:18.051 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.051 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:18.309 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.567 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.567 [103/268] Linking static target lib/librte_mbuf.a 00:03:18.567 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:18.567 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:18.825 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:18.825 [107/268] Linking static target lib/librte_meter.a 00:03:18.826 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:18.826 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:19.083 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.083 [111/268] Linking static target lib/librte_net.a 00:03:19.341 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:19.341 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.341 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:19.341 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:19.599 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:19.599 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.599 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.165 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.423 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:20.423 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.423 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:20.680 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:20.938 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:21.196 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:21.196 [126/268] Linking static target lib/librte_pci.a 00:03:21.196 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:21.453 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:21.453 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:21.453 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:21.453 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:21.712 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:21.712 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.712 [134/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:21.712 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:21.712 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:21.712 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:21.712 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:21.712 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:21.712 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:21.969 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:21.969 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:21.969 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:21.969 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:21.969 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:22.228 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.486 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:22.486 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.744 [149/268] Linking static target lib/librte_cmdline.a 00:03:22.744 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:22.744 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:22.744 [152/268] Linking static target lib/librte_timer.a 00:03:22.744 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:23.309 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:23.309 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:23.566 [156/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:23.566 [157/268] Linking static target lib/librte_hash.a 00:03:23.566 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:23.566 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.824 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.824 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:23.824 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:23.824 [163/268] Linking static target lib/librte_compressdev.a 00:03:24.082 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:24.082 [165/268] Linking static target lib/librte_ethdev.a 00:03:24.340 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:24.340 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:24.340 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:24.612 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:24.612 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.612 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:24.897 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.897 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.897 [174/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:25.155 [175/268] Linking static target lib/librte_dmadev.a 00:03:25.155 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:25.417 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:25.417 [178/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:25.417 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:25.675 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:25.932 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:25.932 [182/268] Linking static target lib/librte_cryptodev.a 00:03:25.932 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:26.190 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:26.190 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:26.190 [186/268] Linking static target lib/librte_reorder.a 00:03:26.190 [187/268] Linking static target lib/librte_power.a 00:03:26.190 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.448 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:26.705 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:27.316 [191/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.316 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:27.316 [193/268] Linking static target lib/librte_security.a 00:03:27.575 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:28.140 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.140 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:28.706 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.964 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:28.964 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:29.225 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:29.484 [201/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.484 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:29.484 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:29.742 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:29.742 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:29.999 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:29.999 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:30.257 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:30.257 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:30.823 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:30.823 [211/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:30.823 [212/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:30.823 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:30.823 [214/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:30.823 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:30.823 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:30.823 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:31.080 [218/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:31.080 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:31.080 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:31.080 [221/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:31.080 
[222/268] Linking static target drivers/librte_mempool_ring.a 00:03:31.080 [223/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:31.080 [224/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:31.080 [225/268] Linking static target drivers/librte_bus_pci.a 00:03:31.080 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.646 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.646 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:31.904 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.161 [230/268] Linking target lib/librte_eal.so.24.1 00:03:32.161 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:32.161 [232/268] Linking target lib/librte_meter.so.24.1 00:03:32.161 [233/268] Linking target lib/librte_timer.so.24.1 00:03:32.161 [234/268] Linking target lib/librte_pci.so.24.1 00:03:32.420 [235/268] Linking target lib/librte_ring.so.24.1 00:03:32.420 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:32.420 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:32.420 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:32.420 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:32.420 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:32.420 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:32.420 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:32.678 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:32.678 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:32.678 [245/268] 
Linking target lib/librte_rcu.so.24.1 00:03:32.678 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:32.678 [247/268] Linking target lib/librte_mbuf.so.24.1 00:03:32.936 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:32.936 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:32.936 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:32.936 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:32.936 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:32.936 [253/268] Linking target lib/librte_net.so.24.1 00:03:33.194 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:33.194 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:33.194 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:33.194 [257/268] Linking target lib/librte_hash.so.24.1 00:03:33.451 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:33.451 [259/268] Linking target lib/librte_security.so.24.1 00:03:33.451 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:34.018 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.276 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:34.276 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:34.533 [264/268] Linking target lib/librte_power.so.24.1 00:03:35.932 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.932 [266/268] Linking static target lib/librte_vhost.a 00:03:37.306 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.565 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:37.565 INFO: autodetecting backend as ninja 00:03:37.565 INFO: calculating backend 
command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:04.164 CC lib/log/log.o 00:04:04.164 CC lib/log/log_flags.o 00:04:04.164 CC lib/log/log_deprecated.o 00:04:04.164 CC lib/ut_mock/mock.o 00:04:04.164 CC lib/ut/ut.o 00:04:04.164 LIB libspdk_ut.a 00:04:04.164 LIB libspdk_log.a 00:04:04.164 SO libspdk_ut.so.2.0 00:04:04.164 LIB libspdk_ut_mock.a 00:04:04.164 SO libspdk_log.so.7.0 00:04:04.164 SO libspdk_ut_mock.so.6.0 00:04:04.164 SYMLINK libspdk_log.so 00:04:04.164 SYMLINK libspdk_ut.so 00:04:04.164 SYMLINK libspdk_ut_mock.so 00:04:04.164 CC lib/dma/dma.o 00:04:04.164 CXX lib/trace_parser/trace.o 00:04:04.164 CC lib/ioat/ioat.o 00:04:04.164 CC lib/util/base64.o 00:04:04.164 CC lib/util/cpuset.o 00:04:04.164 CC lib/util/bit_array.o 00:04:04.164 CC lib/util/crc16.o 00:04:04.164 CC lib/util/crc32.o 00:04:04.164 CC lib/util/crc32c.o 00:04:04.164 CC lib/vfio_user/host/vfio_user_pci.o 00:04:04.164 CC lib/util/crc32_ieee.o 00:04:04.164 CC lib/util/crc64.o 00:04:04.164 CC lib/util/dif.o 00:04:04.164 CC lib/vfio_user/host/vfio_user.o 00:04:04.164 LIB libspdk_dma.a 00:04:04.164 SO libspdk_dma.so.5.0 00:04:04.164 CC lib/util/fd.o 00:04:04.164 CC lib/util/fd_group.o 00:04:04.164 LIB libspdk_ioat.a 00:04:04.164 CC lib/util/file.o 00:04:04.164 SO libspdk_ioat.so.7.0 00:04:04.164 SYMLINK libspdk_dma.so 00:04:04.164 CC lib/util/hexlify.o 00:04:04.164 CC lib/util/iov.o 00:04:04.164 SYMLINK libspdk_ioat.so 00:04:04.164 CC lib/util/math.o 00:04:04.164 CC lib/util/net.o 00:04:04.164 CC lib/util/pipe.o 00:04:04.164 CC lib/util/strerror_tls.o 00:04:04.164 CC lib/util/string.o 00:04:04.164 CC lib/util/uuid.o 00:04:04.164 LIB libspdk_vfio_user.a 00:04:04.164 CC lib/util/xor.o 00:04:04.164 CC lib/util/zipf.o 00:04:04.164 SO libspdk_vfio_user.so.5.0 00:04:04.164 CC lib/util/md5.o 00:04:04.164 SYMLINK libspdk_vfio_user.so 00:04:04.164 LIB libspdk_util.a 00:04:04.164 SO libspdk_util.so.10.1 00:04:04.164 LIB libspdk_trace_parser.a 00:04:04.164 SYMLINK 
libspdk_util.so 00:04:04.164 SO libspdk_trace_parser.so.6.0 00:04:04.164 SYMLINK libspdk_trace_parser.so 00:04:04.164 CC lib/rdma_utils/rdma_utils.o 00:04:04.164 CC lib/rdma_provider/common.o 00:04:04.164 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:04.164 CC lib/vmd/vmd.o 00:04:04.164 CC lib/vmd/led.o 00:04:04.164 CC lib/idxd/idxd.o 00:04:04.164 CC lib/idxd/idxd_user.o 00:04:04.164 CC lib/json/json_parse.o 00:04:04.164 CC lib/conf/conf.o 00:04:04.164 CC lib/env_dpdk/env.o 00:04:04.164 CC lib/env_dpdk/memory.o 00:04:04.164 CC lib/env_dpdk/pci.o 00:04:04.164 LIB libspdk_rdma_provider.a 00:04:04.164 SO libspdk_rdma_provider.so.6.0 00:04:04.164 LIB libspdk_conf.a 00:04:04.164 LIB libspdk_rdma_utils.a 00:04:04.164 CC lib/json/json_util.o 00:04:04.164 SO libspdk_rdma_utils.so.1.0 00:04:04.164 SO libspdk_conf.so.6.0 00:04:04.164 SYMLINK libspdk_rdma_provider.so 00:04:04.164 CC lib/json/json_write.o 00:04:04.164 CC lib/env_dpdk/init.o 00:04:04.164 SYMLINK libspdk_rdma_utils.so 00:04:04.164 CC lib/idxd/idxd_kernel.o 00:04:04.164 SYMLINK libspdk_conf.so 00:04:04.164 CC lib/env_dpdk/threads.o 00:04:04.164 CC lib/env_dpdk/pci_ioat.o 00:04:04.164 CC lib/env_dpdk/pci_virtio.o 00:04:04.164 CC lib/env_dpdk/pci_vmd.o 00:04:04.164 CC lib/env_dpdk/pci_idxd.o 00:04:04.164 CC lib/env_dpdk/pci_event.o 00:04:04.164 LIB libspdk_idxd.a 00:04:04.164 LIB libspdk_json.a 00:04:04.164 LIB libspdk_vmd.a 00:04:04.164 SO libspdk_idxd.so.12.1 00:04:04.164 CC lib/env_dpdk/sigbus_handler.o 00:04:04.164 CC lib/env_dpdk/pci_dpdk.o 00:04:04.164 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:04.164 SO libspdk_json.so.6.0 00:04:04.164 SO libspdk_vmd.so.6.0 00:04:04.164 SYMLINK libspdk_idxd.so 00:04:04.164 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:04.164 SYMLINK libspdk_json.so 00:04:04.164 SYMLINK libspdk_vmd.so 00:04:04.164 CC lib/jsonrpc/jsonrpc_server.o 00:04:04.164 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:04.164 CC lib/jsonrpc/jsonrpc_client.o 00:04:04.164 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:04.164 LIB 
libspdk_jsonrpc.a 00:04:04.164 SO libspdk_jsonrpc.so.6.0 00:04:04.164 SYMLINK libspdk_jsonrpc.so 00:04:04.164 CC lib/rpc/rpc.o 00:04:04.164 LIB libspdk_env_dpdk.a 00:04:04.164 SO libspdk_env_dpdk.so.15.1 00:04:04.164 LIB libspdk_rpc.a 00:04:04.164 SO libspdk_rpc.so.6.0 00:04:04.164 SYMLINK libspdk_env_dpdk.so 00:04:04.164 SYMLINK libspdk_rpc.so 00:04:04.422 CC lib/trace/trace.o 00:04:04.422 CC lib/trace/trace_rpc.o 00:04:04.422 CC lib/trace/trace_flags.o 00:04:04.422 CC lib/keyring/keyring.o 00:04:04.422 CC lib/notify/notify.o 00:04:04.422 CC lib/keyring/keyring_rpc.o 00:04:04.422 CC lib/notify/notify_rpc.o 00:04:04.681 LIB libspdk_notify.a 00:04:04.681 SO libspdk_notify.so.6.0 00:04:04.681 SYMLINK libspdk_notify.so 00:04:04.681 LIB libspdk_keyring.a 00:04:04.681 LIB libspdk_trace.a 00:04:04.939 SO libspdk_keyring.so.2.0 00:04:04.939 SO libspdk_trace.so.11.0 00:04:04.939 SYMLINK libspdk_keyring.so 00:04:04.939 SYMLINK libspdk_trace.so 00:04:05.199 CC lib/sock/sock_rpc.o 00:04:05.199 CC lib/sock/sock.o 00:04:05.199 CC lib/thread/thread.o 00:04:05.199 CC lib/thread/iobuf.o 00:04:05.767 LIB libspdk_sock.a 00:04:05.767 SO libspdk_sock.so.10.0 00:04:06.025 SYMLINK libspdk_sock.so 00:04:06.315 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.315 CC lib/nvme/nvme_fabric.o 00:04:06.315 CC lib/nvme/nvme_ctrlr.o 00:04:06.315 CC lib/nvme/nvme_ns_cmd.o 00:04:06.315 CC lib/nvme/nvme_ns.o 00:04:06.315 CC lib/nvme/nvme_pcie.o 00:04:06.315 CC lib/nvme/nvme_qpair.o 00:04:06.315 CC lib/nvme/nvme_pcie_common.o 00:04:06.315 CC lib/nvme/nvme.o 00:04:07.249 CC lib/nvme/nvme_quirks.o 00:04:07.249 CC lib/nvme/nvme_transport.o 00:04:07.249 CC lib/nvme/nvme_discovery.o 00:04:07.507 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:07.507 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:07.765 CC lib/nvme/nvme_tcp.o 00:04:07.765 CC lib/nvme/nvme_opal.o 00:04:07.765 CC lib/nvme/nvme_io_msg.o 00:04:07.765 CC lib/nvme/nvme_poll_group.o 00:04:08.022 CC lib/nvme/nvme_zns.o 00:04:08.022 CC lib/nvme/nvme_stubs.o 00:04:08.281 CC 
lib/nvme/nvme_auth.o 00:04:08.281 CC lib/nvme/nvme_cuse.o 00:04:08.539 CC lib/nvme/nvme_rdma.o 00:04:08.539 LIB libspdk_thread.a 00:04:08.539 SO libspdk_thread.so.10.2 00:04:08.797 SYMLINK libspdk_thread.so 00:04:09.054 CC lib/accel/accel.o 00:04:09.054 CC lib/accel/accel_rpc.o 00:04:09.054 CC lib/blob/blobstore.o 00:04:09.054 CC lib/init/json_config.o 00:04:09.054 CC lib/virtio/virtio.o 00:04:09.311 CC lib/virtio/virtio_vhost_user.o 00:04:09.311 CC lib/accel/accel_sw.o 00:04:09.311 CC lib/init/subsystem.o 00:04:09.311 CC lib/init/subsystem_rpc.o 00:04:09.569 CC lib/init/rpc.o 00:04:09.569 CC lib/blob/request.o 00:04:09.569 CC lib/blob/zeroes.o 00:04:09.569 CC lib/virtio/virtio_vfio_user.o 00:04:09.827 LIB libspdk_init.a 00:04:09.827 CC lib/blob/blob_bs_dev.o 00:04:09.827 SO libspdk_init.so.6.0 00:04:09.827 CC lib/virtio/virtio_pci.o 00:04:09.827 SYMLINK libspdk_init.so 00:04:10.085 CC lib/fsdev/fsdev.o 00:04:10.085 CC lib/fsdev/fsdev_io.o 00:04:10.085 CC lib/fsdev/fsdev_rpc.o 00:04:10.085 CC lib/event/app.o 00:04:10.085 CC lib/event/reactor.o 00:04:10.085 CC lib/event/log_rpc.o 00:04:10.344 LIB libspdk_virtio.a 00:04:10.344 CC lib/event/app_rpc.o 00:04:10.344 SO libspdk_virtio.so.7.0 00:04:10.344 LIB libspdk_nvme.a 00:04:10.344 SYMLINK libspdk_virtio.so 00:04:10.344 CC lib/event/scheduler_static.o 00:04:10.602 LIB libspdk_accel.a 00:04:10.602 SO libspdk_nvme.so.15.0 00:04:10.602 SO libspdk_accel.so.16.0 00:04:10.602 SYMLINK libspdk_accel.so 00:04:10.602 LIB libspdk_event.a 00:04:10.860 SO libspdk_event.so.15.0 00:04:10.860 LIB libspdk_fsdev.a 00:04:10.860 SYMLINK libspdk_event.so 00:04:10.860 SYMLINK libspdk_nvme.so 00:04:10.860 SO libspdk_fsdev.so.1.0 00:04:10.860 CC lib/bdev/bdev_rpc.o 00:04:10.860 CC lib/bdev/bdev_zone.o 00:04:10.860 CC lib/bdev/bdev.o 00:04:10.860 CC lib/bdev/part.o 00:04:10.860 CC lib/bdev/scsi_nvme.o 00:04:10.860 SYMLINK libspdk_fsdev.so 00:04:11.117 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:12.069 LIB libspdk_fuse_dispatcher.a 
00:04:12.069 SO libspdk_fuse_dispatcher.so.1.0 00:04:12.337 SYMLINK libspdk_fuse_dispatcher.so 00:04:13.713 LIB libspdk_blob.a 00:04:13.971 SO libspdk_blob.so.11.0 00:04:13.971 SYMLINK libspdk_blob.so 00:04:14.228 CC lib/blobfs/blobfs.o 00:04:14.228 CC lib/blobfs/tree.o 00:04:14.228 CC lib/lvol/lvol.o 00:04:14.486 LIB libspdk_bdev.a 00:04:14.744 SO libspdk_bdev.so.17.0 00:04:15.002 SYMLINK libspdk_bdev.so 00:04:15.002 CC lib/nvmf/ctrlr.o 00:04:15.002 CC lib/nvmf/ctrlr_discovery.o 00:04:15.002 CC lib/ublk/ublk.o 00:04:15.002 CC lib/nvmf/ctrlr_bdev.o 00:04:15.002 CC lib/ublk/ublk_rpc.o 00:04:15.002 CC lib/nbd/nbd.o 00:04:15.002 CC lib/scsi/dev.o 00:04:15.002 CC lib/ftl/ftl_core.o 00:04:15.260 CC lib/scsi/lun.o 00:04:15.517 CC lib/scsi/port.o 00:04:15.517 LIB libspdk_blobfs.a 00:04:15.517 CC lib/scsi/scsi.o 00:04:15.517 SO libspdk_blobfs.so.10.0 00:04:15.517 LIB libspdk_lvol.a 00:04:15.517 CC lib/nbd/nbd_rpc.o 00:04:15.775 SO libspdk_lvol.so.10.0 00:04:15.775 CC lib/ftl/ftl_init.o 00:04:15.775 SYMLINK libspdk_blobfs.so 00:04:15.775 CC lib/ftl/ftl_layout.o 00:04:15.775 CC lib/ftl/ftl_debug.o 00:04:15.775 CC lib/scsi/scsi_bdev.o 00:04:15.775 SYMLINK libspdk_lvol.so 00:04:15.775 CC lib/scsi/scsi_pr.o 00:04:15.775 CC lib/scsi/scsi_rpc.o 00:04:15.775 LIB libspdk_nbd.a 00:04:15.775 SO libspdk_nbd.so.7.0 00:04:16.032 CC lib/nvmf/subsystem.o 00:04:16.032 CC lib/scsi/task.o 00:04:16.032 SYMLINK libspdk_nbd.so 00:04:16.032 CC lib/nvmf/nvmf.o 00:04:16.032 LIB libspdk_ublk.a 00:04:16.032 CC lib/ftl/ftl_io.o 00:04:16.032 SO libspdk_ublk.so.3.0 00:04:16.032 CC lib/nvmf/nvmf_rpc.o 00:04:16.032 SYMLINK libspdk_ublk.so 00:04:16.032 CC lib/ftl/ftl_sb.o 00:04:16.032 CC lib/nvmf/transport.o 00:04:16.290 CC lib/ftl/ftl_l2p.o 00:04:16.290 CC lib/ftl/ftl_l2p_flat.o 00:04:16.290 CC lib/ftl/ftl_nv_cache.o 00:04:16.290 LIB libspdk_scsi.a 00:04:16.547 CC lib/ftl/ftl_band.o 00:04:16.547 CC lib/ftl/ftl_band_ops.o 00:04:16.547 CC lib/nvmf/tcp.o 00:04:16.547 SO libspdk_scsi.so.9.0 00:04:16.547 
SYMLINK libspdk_scsi.so 00:04:16.547 CC lib/nvmf/stubs.o 00:04:17.112 CC lib/nvmf/mdns_server.o 00:04:17.112 CC lib/nvmf/rdma.o 00:04:17.112 CC lib/ftl/ftl_writer.o 00:04:17.112 CC lib/iscsi/conn.o 00:04:17.112 CC lib/vhost/vhost.o 00:04:17.370 CC lib/nvmf/auth.o 00:04:17.628 CC lib/ftl/ftl_rq.o 00:04:17.628 CC lib/ftl/ftl_reloc.o 00:04:17.628 CC lib/ftl/ftl_l2p_cache.o 00:04:17.628 CC lib/ftl/ftl_p2l.o 00:04:17.628 CC lib/vhost/vhost_rpc.o 00:04:17.628 CC lib/ftl/ftl_p2l_log.o 00:04:18.194 CC lib/iscsi/init_grp.o 00:04:18.194 CC lib/iscsi/iscsi.o 00:04:18.194 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.194 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.194 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.452 CC lib/iscsi/param.o 00:04:18.452 CC lib/vhost/vhost_scsi.o 00:04:18.452 CC lib/iscsi/portal_grp.o 00:04:18.452 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.452 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.452 CC lib/vhost/vhost_blk.o 00:04:18.452 CC lib/vhost/rte_vhost_user.o 00:04:18.710 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.710 CC lib/iscsi/tgt_node.o 00:04:18.710 CC lib/iscsi/iscsi_subsystem.o 00:04:18.968 CC lib/iscsi/iscsi_rpc.o 00:04:18.968 CC lib/iscsi/task.o 00:04:18.968 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:19.224 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.224 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.224 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.482 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.482 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.482 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.482 CC lib/ftl/utils/ftl_conf.o 00:04:19.482 CC lib/ftl/utils/ftl_md.o 00:04:19.740 CC lib/ftl/utils/ftl_mempool.o 00:04:19.740 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.740 CC lib/ftl/utils/ftl_property.o 00:04:19.740 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.740 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.998 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.998 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.998 LIB libspdk_vhost.a 00:04:19.998 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:04:19.998 SO libspdk_vhost.so.8.0 00:04:19.998 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.998 LIB libspdk_nvmf.a 00:04:19.998 LIB libspdk_iscsi.a 00:04:19.998 SYMLINK libspdk_vhost.so 00:04:19.998 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.998 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.998 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:20.260 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:20.260 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:20.260 SO libspdk_iscsi.so.8.0 00:04:20.260 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:20.260 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:20.260 SO libspdk_nvmf.so.19.0 00:04:20.260 CC lib/ftl/base/ftl_base_dev.o 00:04:20.260 CC lib/ftl/base/ftl_base_bdev.o 00:04:20.260 CC lib/ftl/ftl_trace.o 00:04:20.522 SYMLINK libspdk_iscsi.so 00:04:20.522 SYMLINK libspdk_nvmf.so 00:04:20.841 LIB libspdk_ftl.a 00:04:21.099 SO libspdk_ftl.so.9.0 00:04:21.357 SYMLINK libspdk_ftl.so 00:04:21.615 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.615 CC module/accel/error/accel_error.o 00:04:21.615 CC module/fsdev/aio/fsdev_aio.o 00:04:21.615 CC module/accel/ioat/accel_ioat.o 00:04:21.615 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.615 CC module/accel/dsa/accel_dsa.o 00:04:21.615 CC module/accel/iaa/accel_iaa.o 00:04:21.873 CC module/sock/posix/posix.o 00:04:21.873 CC module/keyring/file/keyring.o 00:04:21.873 CC module/blob/bdev/blob_bdev.o 00:04:21.873 LIB libspdk_env_dpdk_rpc.a 00:04:21.873 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.873 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.873 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.873 CC module/keyring/file/keyring_rpc.o 00:04:21.873 CC module/accel/error/accel_error_rpc.o 00:04:21.873 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.873 LIB libspdk_scheduler_dynamic.a 00:04:21.873 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.131 CC module/accel/dsa/accel_dsa_rpc.o 00:04:22.131 LIB libspdk_accel_iaa.a 00:04:22.131 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.131 LIB libspdk_keyring_file.a 00:04:22.131 SO 
libspdk_accel_iaa.so.3.0 00:04:22.131 LIB libspdk_blob_bdev.a 00:04:22.131 SO libspdk_keyring_file.so.2.0 00:04:22.131 SO libspdk_blob_bdev.so.11.0 00:04:22.131 LIB libspdk_accel_error.a 00:04:22.131 LIB libspdk_accel_ioat.a 00:04:22.131 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:22.131 SO libspdk_accel_error.so.2.0 00:04:22.131 SYMLINK libspdk_accel_iaa.so 00:04:22.131 LIB libspdk_accel_dsa.a 00:04:22.131 SO libspdk_accel_ioat.so.6.0 00:04:22.131 CC module/fsdev/aio/linux_aio_mgr.o 00:04:22.131 SYMLINK libspdk_blob_bdev.so 00:04:22.131 SYMLINK libspdk_keyring_file.so 00:04:22.131 SO libspdk_accel_dsa.so.5.0 00:04:22.131 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:22.390 SYMLINK libspdk_accel_error.so 00:04:22.390 SYMLINK libspdk_accel_ioat.so 00:04:22.390 SYMLINK libspdk_accel_dsa.so 00:04:22.390 CC module/scheduler/gscheduler/gscheduler.o 00:04:22.390 LIB libspdk_scheduler_dpdk_governor.a 00:04:22.390 CC module/keyring/linux/keyring.o 00:04:22.390 CC module/keyring/linux/keyring_rpc.o 00:04:22.390 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:22.647 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.647 CC module/bdev/error/vbdev_error.o 00:04:22.647 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.647 CC module/bdev/gpt/gpt.o 00:04:22.647 CC module/bdev/delay/vbdev_delay.o 00:04:22.647 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.647 LIB libspdk_keyring_linux.a 00:04:22.647 LIB libspdk_scheduler_gscheduler.a 00:04:22.647 LIB libspdk_fsdev_aio.a 00:04:22.647 SO libspdk_keyring_linux.so.1.0 00:04:22.647 SO libspdk_scheduler_gscheduler.so.4.0 00:04:22.647 SO libspdk_fsdev_aio.so.1.0 00:04:22.647 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.647 SYMLINK libspdk_keyring_linux.so 00:04:22.647 LIB libspdk_sock_posix.a 00:04:22.647 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.647 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.647 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:22.904 SYMLINK libspdk_fsdev_aio.so 00:04:22.904 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.904 SO libspdk_sock_posix.so.6.0 00:04:22.904 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.904 SYMLINK libspdk_sock_posix.so 00:04:22.904 LIB libspdk_bdev_error.a 00:04:22.904 LIB libspdk_blobfs_bdev.a 00:04:22.904 CC module/bdev/null/bdev_null.o 00:04:22.904 CC module/bdev/malloc/bdev_malloc.o 00:04:22.904 SO libspdk_bdev_error.so.6.0 00:04:22.904 SO libspdk_blobfs_bdev.so.6.0 00:04:23.160 CC module/bdev/nvme/bdev_nvme.o 00:04:23.160 SYMLINK libspdk_bdev_error.so 00:04:23.160 LIB libspdk_bdev_delay.a 00:04:23.160 CC module/bdev/passthru/vbdev_passthru.o 00:04:23.160 SYMLINK libspdk_blobfs_bdev.so 00:04:23.160 SO libspdk_bdev_delay.so.6.0 00:04:23.160 LIB libspdk_bdev_gpt.a 00:04:23.160 SO libspdk_bdev_gpt.so.6.0 00:04:23.160 SYMLINK libspdk_bdev_delay.so 00:04:23.160 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:23.160 CC module/bdev/raid/bdev_raid.o 00:04:23.160 SYMLINK libspdk_bdev_gpt.so 00:04:23.160 CC module/bdev/raid/bdev_raid_rpc.o 00:04:23.418 CC module/bdev/split/vbdev_split.o 00:04:23.418 CC module/bdev/null/bdev_null_rpc.o 00:04:23.418 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:23.418 LIB libspdk_bdev_lvol.a 00:04:23.418 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:23.418 SO libspdk_bdev_lvol.so.6.0 00:04:23.418 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:23.418 LIB libspdk_bdev_null.a 00:04:23.676 SYMLINK libspdk_bdev_lvol.so 00:04:23.676 CC module/bdev/split/vbdev_split_rpc.o 00:04:23.676 CC module/bdev/raid/bdev_raid_sb.o 00:04:23.676 SO libspdk_bdev_null.so.6.0 00:04:23.676 CC module/bdev/nvme/nvme_rpc.o 00:04:23.676 LIB libspdk_bdev_passthru.a 00:04:23.676 SYMLINK libspdk_bdev_null.so 00:04:23.676 CC module/bdev/nvme/bdev_mdns_client.o 00:04:23.676 SO libspdk_bdev_passthru.so.6.0 00:04:23.676 LIB libspdk_bdev_malloc.a 00:04:23.676 SO libspdk_bdev_malloc.so.6.0 00:04:23.676 LIB libspdk_bdev_split.a 00:04:23.676 SYMLINK libspdk_bdev_passthru.so 00:04:23.676 CC module/bdev/nvme/vbdev_opal.o 
00:04:23.676 SO libspdk_bdev_split.so.6.0 00:04:23.934 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:23.934 SYMLINK libspdk_bdev_malloc.so 00:04:23.934 SYMLINK libspdk_bdev_split.so 00:04:23.934 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:23.934 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:23.934 LIB libspdk_bdev_zone_block.a 00:04:23.934 CC module/bdev/aio/bdev_aio.o 00:04:23.934 CC module/bdev/raid/raid0.o 00:04:23.934 CC module/bdev/ftl/bdev_ftl.o 00:04:23.934 SO libspdk_bdev_zone_block.so.6.0 00:04:23.934 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:24.192 CC module/bdev/iscsi/bdev_iscsi.o 00:04:24.193 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:24.193 SYMLINK libspdk_bdev_zone_block.so 00:04:24.193 CC module/bdev/aio/bdev_aio_rpc.o 00:04:24.193 CC module/bdev/raid/raid1.o 00:04:24.451 CC module/bdev/raid/concat.o 00:04:24.451 CC module/bdev/raid/raid5f.o 00:04:24.451 LIB libspdk_bdev_ftl.a 00:04:24.451 LIB libspdk_bdev_aio.a 00:04:24.451 SO libspdk_bdev_ftl.so.6.0 00:04:24.451 SO libspdk_bdev_aio.so.6.0 00:04:24.451 LIB libspdk_bdev_iscsi.a 00:04:24.451 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:24.451 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:24.451 SYMLINK libspdk_bdev_ftl.so 00:04:24.451 SO libspdk_bdev_iscsi.so.6.0 00:04:24.451 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:24.709 SYMLINK libspdk_bdev_aio.so 00:04:24.709 SYMLINK libspdk_bdev_iscsi.so 00:04:24.966 LIB libspdk_bdev_raid.a 00:04:25.224 SO libspdk_bdev_raid.so.6.0 00:04:25.224 LIB libspdk_bdev_virtio.a 00:04:25.224 SYMLINK libspdk_bdev_raid.so 00:04:25.224 SO libspdk_bdev_virtio.so.6.0 00:04:25.482 SYMLINK libspdk_bdev_virtio.so 00:04:26.417 LIB libspdk_bdev_nvme.a 00:04:26.417 SO libspdk_bdev_nvme.so.7.0 00:04:26.675 SYMLINK libspdk_bdev_nvme.so 00:04:27.240 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:27.240 CC module/event/subsystems/scheduler/scheduler.o 00:04:27.240 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.240 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:04:27.240 CC module/event/subsystems/keyring/keyring.o 00:04:27.240 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.240 CC module/event/subsystems/vmd/vmd.o 00:04:27.240 CC module/event/subsystems/sock/sock.o 00:04:27.240 CC module/event/subsystems/fsdev/fsdev.o 00:04:27.240 LIB libspdk_event_scheduler.a 00:04:27.240 LIB libspdk_event_sock.a 00:04:27.240 LIB libspdk_event_keyring.a 00:04:27.240 LIB libspdk_event_vhost_blk.a 00:04:27.240 SO libspdk_event_scheduler.so.4.0 00:04:27.240 SO libspdk_event_sock.so.5.0 00:04:27.240 LIB libspdk_event_vmd.a 00:04:27.240 SO libspdk_event_keyring.so.1.0 00:04:27.240 SO libspdk_event_vhost_blk.so.3.0 00:04:27.240 LIB libspdk_event_iobuf.a 00:04:27.240 LIB libspdk_event_fsdev.a 00:04:27.240 SO libspdk_event_vmd.so.6.0 00:04:27.240 SO libspdk_event_iobuf.so.3.0 00:04:27.240 SYMLINK libspdk_event_scheduler.so 00:04:27.536 SYMLINK libspdk_event_sock.so 00:04:27.536 SO libspdk_event_fsdev.so.1.0 00:04:27.536 SYMLINK libspdk_event_keyring.so 00:04:27.536 SYMLINK libspdk_event_vhost_blk.so 00:04:27.536 SYMLINK libspdk_event_iobuf.so 00:04:27.536 SYMLINK libspdk_event_vmd.so 00:04:27.536 SYMLINK libspdk_event_fsdev.so 00:04:27.794 CC module/event/subsystems/accel/accel.o 00:04:27.794 LIB libspdk_event_accel.a 00:04:28.052 SO libspdk_event_accel.so.6.0 00:04:28.052 SYMLINK libspdk_event_accel.so 00:04:28.310 CC module/event/subsystems/bdev/bdev.o 00:04:28.568 LIB libspdk_event_bdev.a 00:04:28.568 SO libspdk_event_bdev.so.6.0 00:04:28.568 SYMLINK libspdk_event_bdev.so 00:04:28.827 CC module/event/subsystems/nbd/nbd.o 00:04:28.827 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.827 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.827 CC module/event/subsystems/ublk/ublk.o 00:04:28.827 CC module/event/subsystems/scsi/scsi.o 00:04:29.085 LIB libspdk_event_nbd.a 00:04:29.085 LIB libspdk_event_ublk.a 00:04:29.085 SO libspdk_event_nbd.so.6.0 00:04:29.085 LIB libspdk_event_scsi.a 00:04:29.085 SO libspdk_event_ublk.so.3.0 
00:04:29.085 SO libspdk_event_scsi.so.6.0 00:04:29.085 SYMLINK libspdk_event_nbd.so 00:04:29.085 SYMLINK libspdk_event_ublk.so 00:04:29.085 SYMLINK libspdk_event_scsi.so 00:04:29.343 LIB libspdk_event_nvmf.a 00:04:29.343 SO libspdk_event_nvmf.so.6.0 00:04:29.343 SYMLINK libspdk_event_nvmf.so 00:04:29.343 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.343 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.600 LIB libspdk_event_vhost_scsi.a 00:04:29.600 LIB libspdk_event_iscsi.a 00:04:29.600 SO libspdk_event_vhost_scsi.so.3.0 00:04:29.600 SO libspdk_event_iscsi.so.6.0 00:04:29.858 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.858 SYMLINK libspdk_event_iscsi.so 00:04:29.858 SO libspdk.so.6.0 00:04:29.858 SYMLINK libspdk.so 00:04:30.115 CC app/trace_record/trace_record.o 00:04:30.115 CXX app/trace/trace.o 00:04:30.115 CC app/spdk_lspci/spdk_lspci.o 00:04:30.115 CC app/spdk_nvme_perf/perf.o 00:04:30.115 CC app/nvmf_tgt/nvmf_main.o 00:04:30.373 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.373 CC app/spdk_tgt/spdk_tgt.o 00:04:30.373 CC examples/ioat/perf/perf.o 00:04:30.373 CC examples/util/zipf/zipf.o 00:04:30.373 CC test/thread/poller_perf/poller_perf.o 00:04:30.373 LINK spdk_lspci 00:04:30.373 LINK nvmf_tgt 00:04:30.630 LINK poller_perf 00:04:30.631 LINK zipf 00:04:30.631 LINK ioat_perf 00:04:30.631 LINK spdk_trace_record 00:04:30.631 LINK spdk_tgt 00:04:30.631 LINK iscsi_tgt 00:04:30.631 LINK spdk_trace 00:04:30.888 CC app/spdk_nvme_identify/identify.o 00:04:30.888 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.888 CC examples/ioat/verify/verify.o 00:04:30.888 CC app/spdk_top/spdk_top.o 00:04:30.888 CC app/spdk_dd/spdk_dd.o 00:04:30.888 CC test/dma/test_dma/test_dma.o 00:04:30.888 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:30.888 CC app/fio/nvme/fio_plugin.o 00:04:31.146 LINK spdk_nvme_discover 00:04:31.146 LINK verify 00:04:31.146 CC examples/thread/thread/thread_ex.o 00:04:31.146 LINK interrupt_tgt 00:04:31.404 LINK spdk_dd 00:04:31.404 CC 
app/vhost/vhost.o 00:04:31.404 LINK spdk_nvme_perf 00:04:31.404 CC examples/sock/hello_world/hello_sock.o 00:04:31.404 LINK thread 00:04:31.404 LINK test_dma 00:04:31.661 LINK vhost 00:04:31.661 CC examples/vmd/lsvmd/lsvmd.o 00:04:31.661 CC examples/vmd/led/led.o 00:04:31.661 LINK spdk_nvme 00:04:31.661 CC examples/idxd/perf/perf.o 00:04:31.661 CC app/fio/bdev/fio_plugin.o 00:04:31.920 LINK lsvmd 00:04:31.920 LINK hello_sock 00:04:31.920 LINK led 00:04:31.920 LINK spdk_nvme_identify 00:04:31.920 CC test/app/bdev_svc/bdev_svc.o 00:04:31.920 LINK spdk_top 00:04:31.920 CC examples/accel/perf/accel_perf.o 00:04:31.920 CC test/blobfs/mkfs/mkfs.o 00:04:32.177 CC test/app/histogram_perf/histogram_perf.o 00:04:32.178 CC test/app/jsoncat/jsoncat.o 00:04:32.178 LINK bdev_svc 00:04:32.178 CC test/app/stub/stub.o 00:04:32.178 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:32.178 LINK idxd_perf 00:04:32.178 LINK jsoncat 00:04:32.178 LINK histogram_perf 00:04:32.178 LINK mkfs 00:04:32.178 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:32.453 LINK stub 00:04:32.453 LINK spdk_bdev 00:04:32.453 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:32.453 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:32.728 CC examples/nvme/hello_world/hello_world.o 00:04:32.728 CC examples/blob/hello_world/hello_blob.o 00:04:32.728 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:32.728 TEST_HEADER include/spdk/accel.h 00:04:32.728 TEST_HEADER include/spdk/accel_module.h 00:04:32.728 TEST_HEADER include/spdk/assert.h 00:04:32.728 TEST_HEADER include/spdk/barrier.h 00:04:32.728 TEST_HEADER include/spdk/base64.h 00:04:32.728 TEST_HEADER include/spdk/bdev.h 00:04:32.728 TEST_HEADER include/spdk/bdev_module.h 00:04:32.728 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.728 TEST_HEADER include/spdk/bit_array.h 00:04:32.728 TEST_HEADER include/spdk/bit_pool.h 00:04:32.728 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.728 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.728 LINK nvme_fuzz 00:04:32.728 
TEST_HEADER include/spdk/blobfs.h 00:04:32.728 TEST_HEADER include/spdk/blob.h 00:04:32.728 TEST_HEADER include/spdk/conf.h 00:04:32.728 LINK accel_perf 00:04:32.728 TEST_HEADER include/spdk/config.h 00:04:32.728 TEST_HEADER include/spdk/cpuset.h 00:04:32.728 TEST_HEADER include/spdk/crc16.h 00:04:32.728 TEST_HEADER include/spdk/crc32.h 00:04:32.728 TEST_HEADER include/spdk/crc64.h 00:04:32.728 TEST_HEADER include/spdk/dif.h 00:04:32.728 TEST_HEADER include/spdk/dma.h 00:04:32.728 TEST_HEADER include/spdk/endian.h 00:04:32.728 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.728 TEST_HEADER include/spdk/env.h 00:04:32.728 TEST_HEADER include/spdk/event.h 00:04:32.728 TEST_HEADER include/spdk/fd_group.h 00:04:32.728 TEST_HEADER include/spdk/fd.h 00:04:32.728 TEST_HEADER include/spdk/file.h 00:04:32.728 TEST_HEADER include/spdk/fsdev.h 00:04:32.728 TEST_HEADER include/spdk/fsdev_module.h 00:04:32.728 TEST_HEADER include/spdk/ftl.h 00:04:32.728 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:32.728 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.728 TEST_HEADER include/spdk/hexlify.h 00:04:32.728 TEST_HEADER include/spdk/histogram_data.h 00:04:32.728 TEST_HEADER include/spdk/idxd.h 00:04:32.728 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.728 TEST_HEADER include/spdk/init.h 00:04:32.728 TEST_HEADER include/spdk/ioat.h 00:04:32.728 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.728 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.728 TEST_HEADER include/spdk/json.h 00:04:32.728 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.728 TEST_HEADER include/spdk/keyring.h 00:04:32.728 TEST_HEADER include/spdk/keyring_module.h 00:04:32.728 TEST_HEADER include/spdk/likely.h 00:04:32.728 TEST_HEADER include/spdk/log.h 00:04:32.728 TEST_HEADER include/spdk/lvol.h 00:04:32.728 TEST_HEADER include/spdk/md5.h 00:04:32.728 TEST_HEADER include/spdk/memory.h 00:04:32.728 TEST_HEADER include/spdk/mmio.h 00:04:32.728 TEST_HEADER include/spdk/nbd.h 00:04:32.728 TEST_HEADER include/spdk/net.h 
00:04:32.728 TEST_HEADER include/spdk/notify.h 00:04:32.728 TEST_HEADER include/spdk/nvme.h 00:04:32.729 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.729 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.729 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.729 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.729 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.729 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.729 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.729 TEST_HEADER include/spdk/nvmf.h 00:04:32.729 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.729 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.729 TEST_HEADER include/spdk/opal.h 00:04:32.729 TEST_HEADER include/spdk/opal_spec.h 00:04:32.729 TEST_HEADER include/spdk/pci_ids.h 00:04:32.729 TEST_HEADER include/spdk/pipe.h 00:04:32.729 TEST_HEADER include/spdk/queue.h 00:04:32.729 TEST_HEADER include/spdk/reduce.h 00:04:32.729 TEST_HEADER include/spdk/rpc.h 00:04:32.729 CC test/event/event_perf/event_perf.o 00:04:32.729 TEST_HEADER include/spdk/scheduler.h 00:04:32.729 TEST_HEADER include/spdk/scsi.h 00:04:32.729 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.729 TEST_HEADER include/spdk/sock.h 00:04:32.729 TEST_HEADER include/spdk/stdinc.h 00:04:32.729 TEST_HEADER include/spdk/string.h 00:04:32.729 TEST_HEADER include/spdk/thread.h 00:04:32.729 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.729 TEST_HEADER include/spdk/trace.h 00:04:32.729 TEST_HEADER include/spdk/trace_parser.h 00:04:32.729 TEST_HEADER include/spdk/tree.h 00:04:32.729 TEST_HEADER include/spdk/ublk.h 00:04:32.729 TEST_HEADER include/spdk/util.h 00:04:32.987 TEST_HEADER include/spdk/uuid.h 00:04:32.987 TEST_HEADER include/spdk/version.h 00:04:32.987 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:32.987 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:32.987 TEST_HEADER include/spdk/vhost.h 00:04:32.987 TEST_HEADER include/spdk/vmd.h 00:04:32.987 TEST_HEADER include/spdk/xor.h 00:04:32.987 TEST_HEADER include/spdk/zipf.h 00:04:32.987 CXX 
test/cpp_headers/accel.o 00:04:32.987 LINK hello_world 00:04:32.987 LINK hello_blob 00:04:32.987 CC test/event/reactor/reactor.o 00:04:32.987 LINK hello_fsdev 00:04:32.987 CC test/event/reactor_perf/reactor_perf.o 00:04:32.987 LINK event_perf 00:04:32.987 LINK vhost_fuzz 00:04:32.987 CXX test/cpp_headers/accel_module.o 00:04:33.244 LINK reactor 00:04:33.244 CC examples/nvme/reconnect/reconnect.o 00:04:33.244 LINK reactor_perf 00:04:33.244 CC examples/blob/cli/blobcli.o 00:04:33.244 CXX test/cpp_headers/assert.o 00:04:33.244 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.244 CC test/env/vtophys/vtophys.o 00:04:33.501 CC examples/bdev/hello_world/hello_bdev.o 00:04:33.501 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.501 CC test/event/app_repeat/app_repeat.o 00:04:33.501 CXX test/cpp_headers/barrier.o 00:04:33.501 LINK mem_callbacks 00:04:33.501 LINK vtophys 00:04:33.502 LINK reconnect 00:04:33.502 LINK env_dpdk_post_init 00:04:33.760 LINK hello_bdev 00:04:33.760 LINK app_repeat 00:04:33.760 CXX test/cpp_headers/base64.o 00:04:33.760 CC examples/nvme/arbitration/arbitration.o 00:04:33.760 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.760 CC test/env/memory/memory_ut.o 00:04:33.760 LINK blobcli 00:04:33.760 CC test/event/scheduler/scheduler.o 00:04:33.760 CXX test/cpp_headers/bdev.o 00:04:34.018 LINK nvme_manage 00:04:34.018 CXX test/cpp_headers/bdev_module.o 00:04:34.018 CC test/nvme/aer/aer.o 00:04:34.018 CC test/lvol/esnap/esnap.o 00:04:34.276 CC test/rpc_client/rpc_client_test.o 00:04:34.276 LINK scheduler 00:04:34.276 LINK arbitration 00:04:34.276 CC test/accel/dif/dif.o 00:04:34.276 CXX test/cpp_headers/bdev_zone.o 00:04:34.276 LINK rpc_client_test 00:04:34.534 CC examples/nvme/hotplug/hotplug.o 00:04:34.534 CC test/env/pci/pci_ut.o 00:04:34.534 LINK aer 00:04:34.534 CXX test/cpp_headers/bit_array.o 00:04:34.534 CC test/nvme/reset/reset.o 00:04:34.793 LINK iscsi_fuzz 00:04:34.793 LINK hotplug 00:04:34.793 CC test/nvme/sgl/sgl.o 00:04:34.793 
CXX test/cpp_headers/bit_pool.o 00:04:34.793 LINK bdevperf 00:04:35.051 CXX test/cpp_headers/blob_bdev.o 00:04:35.051 LINK pci_ut 00:04:35.051 LINK reset 00:04:35.051 CC test/nvme/e2edp/nvme_dp.o 00:04:35.051 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.051 LINK sgl 00:04:35.051 CC test/nvme/overhead/overhead.o 00:04:35.051 CXX test/cpp_headers/blobfs_bdev.o 00:04:35.308 LINK dif 00:04:35.308 LINK cmb_copy 00:04:35.308 CC test/nvme/err_injection/err_injection.o 00:04:35.308 LINK memory_ut 00:04:35.308 CC test/nvme/startup/startup.o 00:04:35.308 LINK nvme_dp 00:04:35.308 CC test/nvme/reserve/reserve.o 00:04:35.308 CXX test/cpp_headers/blobfs.o 00:04:35.565 CXX test/cpp_headers/blob.o 00:04:35.565 LINK err_injection 00:04:35.565 CC examples/nvme/abort/abort.o 00:04:35.565 CXX test/cpp_headers/conf.o 00:04:35.565 LINK overhead 00:04:35.565 LINK startup 00:04:35.565 LINK reserve 00:04:35.565 CC test/nvme/simple_copy/simple_copy.o 00:04:35.824 CXX test/cpp_headers/config.o 00:04:35.824 CXX test/cpp_headers/cpuset.o 00:04:35.824 CC test/nvme/connect_stress/connect_stress.o 00:04:35.824 CC test/bdev/bdevio/bdevio.o 00:04:35.824 CC test/nvme/boot_partition/boot_partition.o 00:04:35.824 CC test/nvme/compliance/nvme_compliance.o 00:04:35.824 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.824 CXX test/cpp_headers/crc16.o 00:04:35.824 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.824 LINK boot_partition 00:04:36.084 LINK simple_copy 00:04:36.084 LINK connect_stress 00:04:36.084 LINK abort 00:04:36.084 LINK fused_ordering 00:04:36.084 CXX test/cpp_headers/crc32.o 00:04:36.084 CXX test/cpp_headers/crc64.o 00:04:36.084 LINK doorbell_aers 00:04:36.342 CC test/nvme/cuse/cuse.o 00:04:36.342 CC test/nvme/fdp/fdp.o 00:04:36.342 LINK nvme_compliance 00:04:36.342 CXX test/cpp_headers/dif.o 00:04:36.342 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:36.342 LINK bdevio 00:04:36.342 CXX test/cpp_headers/dma.o 00:04:36.342 CXX test/cpp_headers/endian.o 00:04:36.342 
CXX test/cpp_headers/env_dpdk.o 00:04:36.342 CXX test/cpp_headers/env.o 00:04:36.342 CXX test/cpp_headers/event.o 00:04:36.600 CXX test/cpp_headers/fd_group.o 00:04:36.600 CXX test/cpp_headers/fd.o 00:04:36.600 CXX test/cpp_headers/file.o 00:04:36.600 LINK pmr_persistence 00:04:36.600 CXX test/cpp_headers/fsdev.o 00:04:36.600 CXX test/cpp_headers/fsdev_module.o 00:04:36.600 CXX test/cpp_headers/ftl.o 00:04:36.600 LINK fdp 00:04:36.600 CXX test/cpp_headers/fuse_dispatcher.o 00:04:36.600 CXX test/cpp_headers/gpt_spec.o 00:04:36.600 CXX test/cpp_headers/hexlify.o 00:04:36.600 CXX test/cpp_headers/histogram_data.o 00:04:36.858 CXX test/cpp_headers/idxd.o 00:04:36.858 CXX test/cpp_headers/idxd_spec.o 00:04:36.858 CXX test/cpp_headers/init.o 00:04:36.858 CXX test/cpp_headers/ioat.o 00:04:36.858 CXX test/cpp_headers/ioat_spec.o 00:04:36.858 CXX test/cpp_headers/iscsi_spec.o 00:04:36.858 CC examples/nvmf/nvmf/nvmf.o 00:04:36.858 CXX test/cpp_headers/json.o 00:04:37.115 CXX test/cpp_headers/jsonrpc.o 00:04:37.115 CXX test/cpp_headers/keyring.o 00:04:37.115 CXX test/cpp_headers/keyring_module.o 00:04:37.115 CXX test/cpp_headers/likely.o 00:04:37.115 CXX test/cpp_headers/log.o 00:04:37.115 CXX test/cpp_headers/lvol.o 00:04:37.115 CXX test/cpp_headers/md5.o 00:04:37.115 CXX test/cpp_headers/memory.o 00:04:37.115 CXX test/cpp_headers/mmio.o 00:04:37.372 CXX test/cpp_headers/nbd.o 00:04:37.372 CXX test/cpp_headers/net.o 00:04:37.372 CXX test/cpp_headers/notify.o 00:04:37.372 CXX test/cpp_headers/nvme.o 00:04:37.372 CXX test/cpp_headers/nvme_intel.o 00:04:37.372 LINK nvmf 00:04:37.372 CXX test/cpp_headers/nvme_ocssd.o 00:04:37.372 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:37.372 CXX test/cpp_headers/nvme_spec.o 00:04:37.372 CXX test/cpp_headers/nvme_zns.o 00:04:37.372 CXX test/cpp_headers/nvmf_cmd.o 00:04:37.372 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:37.630 CXX test/cpp_headers/nvmf.o 00:04:37.630 CXX test/cpp_headers/nvmf_spec.o 00:04:37.630 CXX 
test/cpp_headers/nvmf_transport.o 00:04:37.630 CXX test/cpp_headers/opal.o 00:04:37.630 CXX test/cpp_headers/opal_spec.o 00:04:37.630 CXX test/cpp_headers/pci_ids.o 00:04:37.630 CXX test/cpp_headers/pipe.o 00:04:37.630 CXX test/cpp_headers/queue.o 00:04:37.630 CXX test/cpp_headers/reduce.o 00:04:37.630 CXX test/cpp_headers/rpc.o 00:04:37.630 CXX test/cpp_headers/scheduler.o 00:04:37.888 CXX test/cpp_headers/scsi.o 00:04:37.888 CXX test/cpp_headers/scsi_spec.o 00:04:37.888 CXX test/cpp_headers/sock.o 00:04:37.888 CXX test/cpp_headers/stdinc.o 00:04:37.888 CXX test/cpp_headers/string.o 00:04:37.888 LINK cuse 00:04:37.888 CXX test/cpp_headers/thread.o 00:04:37.888 CXX test/cpp_headers/trace.o 00:04:37.888 CXX test/cpp_headers/trace_parser.o 00:04:38.145 CXX test/cpp_headers/tree.o 00:04:38.145 CXX test/cpp_headers/ublk.o 00:04:38.145 CXX test/cpp_headers/util.o 00:04:38.145 CXX test/cpp_headers/uuid.o 00:04:38.145 CXX test/cpp_headers/version.o 00:04:38.145 CXX test/cpp_headers/vfio_user_pci.o 00:04:38.145 CXX test/cpp_headers/vfio_user_spec.o 00:04:38.145 CXX test/cpp_headers/vhost.o 00:04:38.145 CXX test/cpp_headers/vmd.o 00:04:38.145 CXX test/cpp_headers/xor.o 00:04:38.145 CXX test/cpp_headers/zipf.o 00:04:41.432 LINK esnap 00:04:41.691 00:04:41.691 real 1m47.321s 00:04:41.691 user 10m13.189s 00:04:41.691 sys 2m0.833s 00:04:41.691 ************************************ 00:04:41.691 END TEST make 00:04:41.691 ************************************ 00:04:41.691 16:12:34 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:41.691 16:12:34 make -- common/autotest_common.sh@10 -- $ set +x 00:04:41.691 16:12:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.691 16:12:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.691 16:12:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.691 16:12:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.691 16:12:34 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.691 16:12:34 -- pm/common@44 -- $ pid=5295 00:04:41.691 16:12:34 -- pm/common@50 -- $ kill -TERM 5295 00:04:41.691 16:12:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.691 16:12:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.691 16:12:34 -- pm/common@44 -- $ pid=5297 00:04:41.691 16:12:34 -- pm/common@50 -- $ kill -TERM 5297 00:04:41.983 16:12:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.983 16:12:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.983 16:12:35 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.983 16:12:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.983 16:12:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.983 16:12:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.983 16:12:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.983 16:12:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.983 16:12:35 -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.983 16:12:35 -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.983 16:12:35 -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.983 16:12:35 -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.983 16:12:35 -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.984 16:12:35 -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.984 16:12:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.984 16:12:35 -- scripts/common.sh@344 -- # case "$op" in 00:04:41.984 16:12:35 -- scripts/common.sh@345 -- # : 1 00:04:41.984 16:12:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.984 16:12:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.984 16:12:35 -- scripts/common.sh@365 -- # decimal 1 00:04:41.984 16:12:35 -- scripts/common.sh@353 -- # local d=1 00:04:41.984 16:12:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.984 16:12:35 -- scripts/common.sh@355 -- # echo 1 00:04:41.984 16:12:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.984 16:12:35 -- scripts/common.sh@366 -- # decimal 2 00:04:41.984 16:12:35 -- scripts/common.sh@353 -- # local d=2 00:04:41.984 16:12:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.984 16:12:35 -- scripts/common.sh@355 -- # echo 2 00:04:41.984 16:12:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.984 16:12:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.984 16:12:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.984 16:12:35 -- scripts/common.sh@368 -- # return 0 00:04:41.984 16:12:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.984 16:12:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.984 --rc genhtml_branch_coverage=1 00:04:41.984 --rc genhtml_function_coverage=1 00:04:41.984 --rc genhtml_legend=1 00:04:41.984 --rc geninfo_all_blocks=1 00:04:41.984 --rc geninfo_unexecuted_blocks=1 00:04:41.984 00:04:41.984 ' 00:04:41.984 16:12:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.984 --rc genhtml_branch_coverage=1 00:04:41.984 --rc genhtml_function_coverage=1 00:04:41.984 --rc genhtml_legend=1 00:04:41.984 --rc geninfo_all_blocks=1 00:04:41.984 --rc geninfo_unexecuted_blocks=1 00:04:41.984 00:04:41.984 ' 00:04:41.984 16:12:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.984 --rc genhtml_branch_coverage=1 00:04:41.984 --rc 
genhtml_function_coverage=1 00:04:41.984 --rc genhtml_legend=1 00:04:41.984 --rc geninfo_all_blocks=1 00:04:41.984 --rc geninfo_unexecuted_blocks=1 00:04:41.984 00:04:41.984 ' 00:04:41.984 16:12:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.984 --rc genhtml_branch_coverage=1 00:04:41.984 --rc genhtml_function_coverage=1 00:04:41.984 --rc genhtml_legend=1 00:04:41.984 --rc geninfo_all_blocks=1 00:04:41.984 --rc geninfo_unexecuted_blocks=1 00:04:41.984 00:04:41.984 ' 00:04:41.984 16:12:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.984 16:12:35 -- nvmf/common.sh@7 -- # uname -s 00:04:41.984 16:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.984 16:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.984 16:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.984 16:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.984 16:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.984 16:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.984 16:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.984 16:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.984 16:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.984 16:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.984 16:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:04:41.984 16:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:04:41.984 16:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.984 16:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.984 16:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.984 16:12:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:41.984 16:12:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.984 16:12:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.984 16:12:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.984 16:12:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.984 16:12:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.984 16:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.984 16:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.984 16:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.984 16:12:35 -- paths/export.sh@5 -- # export PATH 00:04:41.984 16:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.984 16:12:35 -- nvmf/common.sh@51 -- # : 0 00:04:41.984 16:12:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.984 16:12:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.984 16:12:35 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:41.984 16:12:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.984 16:12:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.984 16:12:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.984 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.984 16:12:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.984 16:12:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.984 16:12:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.984 16:12:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.984 16:12:35 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.984 16:12:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.984 16:12:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.984 16:12:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.984 16:12:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.984 16:12:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.984 16:12:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.984 16:12:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.984 16:12:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.984 16:12:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54424 00:04:41.984 16:12:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:41.984 16:12:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.984 16:12:35 -- pm/common@17 -- # local monitor 00:04:41.984 16:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.984 16:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.984 16:12:35 -- pm/common@21 -- # date +%s 00:04:41.984 16:12:35 -- pm/common@25 -- # sleep 1 00:04:41.984 16:12:35 -- 
pm/common@21 -- # date +%s 00:04:41.984 16:12:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728403955 00:04:41.984 16:12:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728403955 00:04:42.242 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728403955_collect-cpu-load.pm.log 00:04:42.242 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728403955_collect-vmstat.pm.log 00:04:43.178 16:12:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.178 16:12:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.178 16:12:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.178 16:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:43.178 16:12:36 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.178 16:12:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:43.178 16:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:43.178 16:12:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:43.178 16:12:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:43.178 16:12:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:43.178 16:12:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:43.179 16:12:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:43.179 16:12:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.179 16:12:36 -- common/autotest_common.sh@1455 -- # uname 00:04:43.179 16:12:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:43.179 16:12:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.179 16:12:36 -- common/autotest_common.sh@1475 -- 
# uname 00:04:43.179 16:12:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:43.179 16:12:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.179 16:12:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.179 lcov: LCOV version 1.15 00:04:43.179 16:12:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:01.256 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:01.256 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:19.335 16:13:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:19.335 16:13:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.335 16:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.335 16:13:10 -- spdk/autotest.sh@78 -- # rm -f 00:05:19.335 16:13:10 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.335 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:19.335 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:19.335 16:13:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:19.335 16:13:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:19.335 16:13:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:19.335 16:13:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:19.335 
16:13:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:19.336 16:13:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:19.336 16:13:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:19.336 16:13:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:19.336 16:13:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:05:19.336 16:13:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:05:19.336 16:13:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:19.336 16:13:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:05:19.336 16:13:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:05:19.336 16:13:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:19.336 16:13:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:19.336 16:13:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:19.336 16:13:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:19.336 16:13:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:19.336 16:13:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:19.336 16:13:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.336 16:13:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:19.336 16:13:11 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:19.336 16:13:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:19.336 16:13:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:19.336 No valid GPT data, bailing 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # pt= 00:05:19.336 16:13:11 -- scripts/common.sh@395 -- # return 1 00:05:19.336 16:13:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:19.336 1+0 records in 00:05:19.336 1+0 records out 00:05:19.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511554 s, 205 MB/s 00:05:19.336 16:13:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.336 16:13:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:19.336 16:13:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:19.336 16:13:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:19.336 16:13:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:19.336 No valid GPT data, bailing 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # pt= 00:05:19.336 16:13:11 -- scripts/common.sh@395 -- # return 1 00:05:19.336 16:13:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:19.336 1+0 records in 00:05:19.336 1+0 records out 00:05:19.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526585 s, 199 MB/s 00:05:19.336 16:13:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.336 16:13:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:19.336 16:13:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:19.336 16:13:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:19.336 16:13:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 
00:05:19.336 No valid GPT data, bailing 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # pt= 00:05:19.336 16:13:11 -- scripts/common.sh@395 -- # return 1 00:05:19.336 16:13:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:19.336 1+0 records in 00:05:19.336 1+0 records out 00:05:19.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537903 s, 195 MB/s 00:05:19.336 16:13:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.336 16:13:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:19.336 16:13:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:19.336 16:13:11 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:19.336 16:13:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:19.336 No valid GPT data, bailing 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:19.336 16:13:11 -- scripts/common.sh@394 -- # pt= 00:05:19.336 16:13:11 -- scripts/common.sh@395 -- # return 1 00:05:19.336 16:13:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:19.336 1+0 records in 00:05:19.336 1+0 records out 00:05:19.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052886 s, 198 MB/s 00:05:19.336 16:13:11 -- spdk/autotest.sh@105 -- # sync 00:05:19.336 16:13:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:19.336 16:13:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:19.336 16:13:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.713 16:13:13 -- spdk/autotest.sh@111 -- # uname -s 00:05:20.713 16:13:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:20.713 16:13:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:20.713 16:13:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
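The `block_in_use`/`dd` sequence above probes each namespace for a partition table and, finding none ("No valid GPT data, bailing", empty `PTTYPE` from blkid), zeroes the first 1 MiB so stale metadata cannot confuse later tests. A simplified sketch, assuming `has_partition_table` consults blkid only (the real `scripts/common.sh` also runs SPDK's `spdk-gpt.py`):

```shell
# Sketch of the per-namespace wipe loop traced above. has_partition_table is
# a simplified stand-in: it checks blkid only, whereas scripts/common.sh also
# consults spdk-gpt.py before declaring a device free.
set -u

has_partition_table() {
    local block=$1
    # blkid prints the partition-table type (gpt, dos, ...) when one exists,
    # and nothing otherwise
    [[ -n "$(blkid -s PTTYPE -o value "$block" 2>/dev/null)" ]]
}

wipe_if_free() {
    local dev=$1
    if has_partition_table "$dev"; then
        echo "$dev is in use, skipping"
    else
        # Zero the first 1 MiB, matching the dd invocation in the log
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
}
```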
00:05:21.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.279 Hugepages 00:05:21.279 node hugesize free / total 00:05:21.279 node0 1048576kB 0 / 0 00:05:21.279 node0 2048kB 0 / 0 00:05:21.279 00:05:21.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.536 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:21.536 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:21.536 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:21.536 16:13:14 -- spdk/autotest.sh@117 -- # uname -s 00:05:21.536 16:13:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:21.536 16:13:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:21.536 16:13:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.362 16:13:15 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:23.737 16:13:16 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:23.737 16:13:16 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:23.737 16:13:16 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.737 16:13:16 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:23.737 16:13:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:23.737 16:13:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:23.737 16:13:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.737 16:13:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.737 16:13:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:23.737 16:13:16 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:23.737 16:13:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:23.737 16:13:16 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.996 Waiting for block devices as requested 00:05:23.996 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.996 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.255 16:13:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:24.255 16:13:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:24.255 16:13:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:24.255 16:13:17 -- common/autotest_common.sh@1532 -- 
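The `get_nvme_ctrlr_from_bdf` steps above resolve each `/sys/class/nvme/nvme*` symlink to its full PCI path and keep the one containing the wanted BDF (here `0000:00:10.0` maps to `nvme1`). A sketch of that lookup; `SYSFS_NVME` is a hypothetical override added so the walk can be tested against a fake tree:

```shell
# Sketch of the get_nvme_ctrlr_from_bdf lookup traced above. SYSFS_NVME is an
# assumption of this sketch; the real helper hardcodes /sys/class/nvme.
SYSFS_NVME=${SYSFS_NVME:-/sys/class/nvme}

get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    for path in "$SYSFS_NVME"/nvme*; do
        [[ -e $path ]] || continue
        # e.g. resolves to /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
        if readlink -f "$path" | grep -q "$bdf/nvme/nvme"; then
            printf '/dev/%s\n' "$(basename "$path")"
            return 0
        fi
    done
    return 1
}
```

Note that controller numbering need not follow BDF order — in this run `0000:00:10.0` owns `nvme1` while `0000:00:11.0` owns `nvme0`.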
# [[ 8 -ne 0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:24.255 16:13:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1541 -- # continue 00:05:24.255 16:13:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:24.255 16:13:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:24.255 16:13:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:24.255 16:13:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:24.255 16:13:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:24.255 16:13:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 
00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:24.255 16:13:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:24.255 16:13:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:24.255 16:13:17 -- common/autotest_common.sh@1541 -- # continue 00:05:24.255 16:13:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:24.255 16:13:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.255 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.255 16:13:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:24.255 16:13:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.255 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.255 16:13:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.080 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.080 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.080 16:13:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:25.080 16:13:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.080 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.080 16:13:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:25.080 16:13:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:25.080 16:13:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:25.080 16:13:18 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:25.080 16:13:18 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:25.080 16:13:18 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:25.080 16:13:18 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:25.080 16:13:18 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:25.080 
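The OACS probe above pipes `nvme id-ctrl` through `grep oacs | cut -d: -f2` and masks the result with bit 3 (0x8), which advertises Namespace Management support; `0x12a & 0x8` yields the `oacs_ns_manage=8` seen in the trace. A sketch of the parsing, taking the `id-ctrl` output as an argument so it can run without a device:

```shell
# Sketch of the OACS parsing traced above. The id-ctrl text is passed in as
# an argument here (an assumption of this sketch) rather than read from
# "nvme id-ctrl /dev/nvmeX" as autotest_common.sh does.
oacs_ns_manage() {
    local idctrl_output=$1 oacs
    # The oacs line looks like "oacs      : 0x12a"
    oacs=$(grep -m1 oacs <<<"$idctrl_output" | cut -d: -f2)
    # Bit 3 (0x8) of OACS = Namespace Management and Attachment supported
    echo $(( oacs & 0x8 ))
}
```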
16:13:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:25.080 16:13:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:25.080 16:13:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.080 16:13:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.080 16:13:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:25.339 16:13:18 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:25.339 16:13:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:25.339 16:13:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.339 16:13:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:25.339 16:13:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.339 16:13:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.339 16:13:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.339 16:13:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:25.339 16:13:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.339 16:13:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.339 16:13:18 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:25.339 16:13:18 -- common/autotest_common.sh@1570 -- # return 0 00:05:25.339 16:13:18 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:25.339 16:13:18 -- common/autotest_common.sh@1578 -- # return 0 00:05:25.339 16:13:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:25.339 16:13:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:25.339 16:13:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.339 16:13:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.339 16:13:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:25.339 16:13:18 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.339 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.339 16:13:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:25.339 16:13:18 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:25.339 16:13:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.339 16:13:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.339 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.339 ************************************ 00:05:25.339 START TEST env 00:05:25.339 ************************************ 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:25.339 * Looking for test storage... 00:05:25.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:25.339 16:13:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.339 16:13:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.339 16:13:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.339 16:13:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.339 16:13:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.339 16:13:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.339 16:13:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.339 16:13:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.339 16:13:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.339 16:13:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.339 16:13:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.339 16:13:18 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:25.339 16:13:18 env -- scripts/common.sh@345 -- # : 1 00:05:25.339 16:13:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.339 16:13:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.339 16:13:18 env -- scripts/common.sh@365 -- # decimal 1 00:05:25.339 16:13:18 env -- scripts/common.sh@353 -- # local d=1 00:05:25.339 16:13:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.339 16:13:18 env -- scripts/common.sh@355 -- # echo 1 00:05:25.339 16:13:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.339 16:13:18 env -- scripts/common.sh@366 -- # decimal 2 00:05:25.339 16:13:18 env -- scripts/common.sh@353 -- # local d=2 00:05:25.339 16:13:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.339 16:13:18 env -- scripts/common.sh@355 -- # echo 2 00:05:25.339 16:13:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.339 16:13:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.339 16:13:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.339 16:13:18 env -- scripts/common.sh@368 -- # return 0 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.339 --rc genhtml_branch_coverage=1 00:05:25.339 --rc genhtml_function_coverage=1 00:05:25.339 --rc genhtml_legend=1 00:05:25.339 --rc geninfo_all_blocks=1 00:05:25.339 --rc geninfo_unexecuted_blocks=1 00:05:25.339 00:05:25.339 ' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.339 --rc genhtml_branch_coverage=1 00:05:25.339 --rc genhtml_function_coverage=1 00:05:25.339 --rc genhtml_legend=1 00:05:25.339 --rc 
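The `lt 1.15 2` trace above is `scripts/common.sh` comparing the lcov version component-wise: each version string is split on `.`, `-` and `:` (`IFS=.-:`), missing components count as 0, and the loop runs to the longer of the two lengths. A sketch of that comparison, assuming purely numeric components:

```shell
# Sketch of the cmp_versions logic traced above (the "lt" path only).
# Assumes numeric components; returns 0 when $1 < $2.
version_lt() {
    local -a ver1 ver2
    local i n
    IFS='.-:' read -ra ver1 <<<"$1"
    IFS='.-:' read -ra ver2 <<<"$2"
    # Iterate to the longer length, padding the shorter version with zeros
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
```

With this logic `1.15 < 2` holds because the first component decides (1 < 2), which is why the log's comparison stops at `ver1[v]=1` vs `ver2[v]=2`.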
geninfo_all_blocks=1 00:05:25.339 --rc geninfo_unexecuted_blocks=1 00:05:25.339 00:05:25.339 ' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.339 --rc genhtml_branch_coverage=1 00:05:25.339 --rc genhtml_function_coverage=1 00:05:25.339 --rc genhtml_legend=1 00:05:25.339 --rc geninfo_all_blocks=1 00:05:25.339 --rc geninfo_unexecuted_blocks=1 00:05:25.339 00:05:25.339 ' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.339 --rc genhtml_branch_coverage=1 00:05:25.339 --rc genhtml_function_coverage=1 00:05:25.339 --rc genhtml_legend=1 00:05:25.339 --rc geninfo_all_blocks=1 00:05:25.339 --rc geninfo_unexecuted_blocks=1 00:05:25.339 00:05:25.339 ' 00:05:25.339 16:13:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.339 16:13:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.339 16:13:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.339 ************************************ 00:05:25.339 START TEST env_memory 00:05:25.339 ************************************ 00:05:25.339 16:13:18 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.597 00:05:25.597 00:05:25.597 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.597 http://cunit.sourceforge.net/ 00:05:25.597 00:05:25.597 00:05:25.597 Suite: memory 00:05:25.597 Test: alloc and free memory map ...[2024-10-08 16:13:18.722683] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.597 passed 00:05:25.597 Test: mem map translation ...[2024-10-08 16:13:18.783078] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.597 [2024-10-08 16:13:18.783160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.597 [2024-10-08 16:13:18.783249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.597 [2024-10-08 16:13:18.783277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.597 passed 00:05:25.597 Test: mem map registration ...[2024-10-08 16:13:18.886327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:25.597 [2024-10-08 16:13:18.886442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:25.856 passed 00:05:25.856 Test: mem map adjacent registrations ...passed 00:05:25.856 00:05:25.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.856 suites 1 1 n/a 0 0 00:05:25.856 tests 4 4 4 0 0 00:05:25.856 asserts 152 152 152 0 n/a 00:05:25.856 00:05:25.856 Elapsed time = 0.348 seconds 00:05:25.856 00:05:25.856 real 0m0.390s 00:05:25.856 user 0m0.358s 00:05:25.856 sys 0m0.025s 00:05:25.856 16:13:19 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.856 ************************************ 00:05:25.856 END TEST env_memory 00:05:25.856 ************************************ 00:05:25.856 16:13:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.856 16:13:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.856 
16:13:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.856 16:13:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.856 16:13:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.856 ************************************ 00:05:25.856 START TEST env_vtophys 00:05:25.856 ************************************ 00:05:25.856 16:13:19 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.856 EAL: lib.eal log level changed from notice to debug 00:05:25.856 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 1 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 2 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 3 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 4 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 5 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 6 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 7 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 8 as core 0 on socket 0 00:05:25.856 EAL: Detected lcore 9 as core 0 on socket 0 00:05:26.121 EAL: Maximum logical cores by configuration: 128 00:05:26.121 EAL: Detected CPU lcores: 10 00:05:26.121 EAL: Detected NUMA nodes: 1 00:05:26.121 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:26.121 EAL: Detected shared linkage of DPDK 00:05:26.121 EAL: No shared files mode enabled, IPC will be disabled 00:05:26.121 EAL: Selected IOVA mode 'PA' 00:05:26.121 EAL: Probing VFIO support... 00:05:26.121 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:26.121 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:26.121 EAL: Ask a virtual area of 0x2e000 bytes 00:05:26.121 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:26.121 EAL: Setting up physically contiguous memory... 
00:05:26.121 EAL: Setting maximum number of open files to 524288 00:05:26.121 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:26.121 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:26.121 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.121 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:26.121 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.121 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.121 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:26.121 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:26.121 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.121 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:26.121 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.121 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.121 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:26.121 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:26.121 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.121 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:26.121 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.121 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.121 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:26.121 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:26.121 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.121 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:26.121 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.121 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.121 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:26.121 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:26.121 EAL: Hugepages will be freed exactly as allocated. 
00:05:26.121 EAL: No shared files mode enabled, IPC is disabled 00:05:26.121 EAL: No shared files mode enabled, IPC is disabled 00:05:26.121 EAL: TSC frequency is ~2200000 KHz 00:05:26.121 EAL: Main lcore 0 is ready (tid=7fb6360cba40;cpuset=[0]) 00:05:26.122 EAL: Trying to obtain current memory policy. 00:05:26.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.122 EAL: Restoring previous memory policy: 0 00:05:26.122 EAL: request: mp_malloc_sync 00:05:26.122 EAL: No shared files mode enabled, IPC is disabled 00:05:26.122 EAL: Heap on socket 0 was expanded by 2MB 00:05:26.122 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:26.122 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:26.122 EAL: Mem event callback 'spdk:(nil)' registered 00:05:26.122 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:26.122 00:05:26.122 00:05:26.122 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.122 http://cunit.sourceforge.net/ 00:05:26.122 00:05:26.122 00:05:26.122 Suite: components_suite 00:05:26.688 Test: vtophys_malloc_test ...passed 00:05:26.688 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:26.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.688 EAL: Restoring previous memory policy: 4 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was expanded by 4MB 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was shrunk by 4MB 00:05:26.688 EAL: Trying to obtain current memory policy. 
00:05:26.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.688 EAL: Restoring previous memory policy: 4 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was expanded by 6MB 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was shrunk by 6MB 00:05:26.688 EAL: Trying to obtain current memory policy. 00:05:26.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.688 EAL: Restoring previous memory policy: 4 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was expanded by 10MB 00:05:26.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.688 EAL: request: mp_malloc_sync 00:05:26.688 EAL: No shared files mode enabled, IPC is disabled 00:05:26.688 EAL: Heap on socket 0 was shrunk by 10MB 00:05:26.688 EAL: Trying to obtain current memory policy. 00:05:26.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.688 EAL: Restoring previous memory policy: 4 00:05:26.689 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.689 EAL: request: mp_malloc_sync 00:05:26.689 EAL: No shared files mode enabled, IPC is disabled 00:05:26.689 EAL: Heap on socket 0 was expanded by 18MB 00:05:26.689 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.689 EAL: request: mp_malloc_sync 00:05:26.689 EAL: No shared files mode enabled, IPC is disabled 00:05:26.689 EAL: Heap on socket 0 was shrunk by 18MB 00:05:26.689 EAL: Trying to obtain current memory policy. 
00:05:26.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.689 EAL: Restoring previous memory policy: 4 00:05:26.689 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.689 EAL: request: mp_malloc_sync 00:05:26.689 EAL: No shared files mode enabled, IPC is disabled 00:05:26.689 EAL: Heap on socket 0 was expanded by 34MB 00:05:26.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.946 EAL: request: mp_malloc_sync 00:05:26.946 EAL: No shared files mode enabled, IPC is disabled 00:05:26.946 EAL: Heap on socket 0 was shrunk by 34MB 00:05:26.946 EAL: Trying to obtain current memory policy. 00:05:26.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.946 EAL: Restoring previous memory policy: 4 00:05:26.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.946 EAL: request: mp_malloc_sync 00:05:26.946 EAL: No shared files mode enabled, IPC is disabled 00:05:26.946 EAL: Heap on socket 0 was expanded by 66MB 00:05:26.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.946 EAL: request: mp_malloc_sync 00:05:26.946 EAL: No shared files mode enabled, IPC is disabled 00:05:26.947 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.205 EAL: Trying to obtain current memory policy. 00:05:27.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.205 EAL: Restoring previous memory policy: 4 00:05:27.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.205 EAL: request: mp_malloc_sync 00:05:27.205 EAL: No shared files mode enabled, IPC is disabled 00:05:27.205 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.464 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.464 EAL: request: mp_malloc_sync 00:05:27.464 EAL: No shared files mode enabled, IPC is disabled 00:05:27.464 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.752 EAL: Trying to obtain current memory policy. 
00:05:27.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.752 EAL: Restoring previous memory policy: 4 00:05:27.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.752 EAL: request: mp_malloc_sync 00:05:27.752 EAL: No shared files mode enabled, IPC is disabled 00:05:27.752 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.021 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.279 EAL: request: mp_malloc_sync 00:05:28.279 EAL: No shared files mode enabled, IPC is disabled 00:05:28.279 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.537 EAL: Trying to obtain current memory policy. 00:05:28.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.796 EAL: Restoring previous memory policy: 4 00:05:28.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.796 EAL: request: mp_malloc_sync 00:05:28.796 EAL: No shared files mode enabled, IPC is disabled 00:05:28.796 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.730 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.730 EAL: request: mp_malloc_sync 00:05:29.730 EAL: No shared files mode enabled, IPC is disabled 00:05:29.730 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.664 EAL: Trying to obtain current memory policy. 
00:05:30.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.923 EAL: Restoring previous memory policy: 4 00:05:30.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.923 EAL: request: mp_malloc_sync 00:05:30.923 EAL: No shared files mode enabled, IPC is disabled 00:05:30.923 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.363 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.622 EAL: request: mp_malloc_sync 00:05:32.622 EAL: No shared files mode enabled, IPC is disabled 00:05:32.622 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.036 passed 00:05:34.036 00:05:34.036 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.036 suites 1 1 n/a 0 0 00:05:34.036 tests 2 2 2 0 0 00:05:34.036 asserts 5579 5579 5579 0 n/a 00:05:34.036 00:05:34.036 Elapsed time = 7.870 seconds 00:05:34.036 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.036 EAL: request: mp_malloc_sync 00:05:34.036 EAL: No shared files mode enabled, IPC is disabled 00:05:34.036 EAL: Heap on socket 0 was shrunk by 2MB 00:05:34.036 EAL: No shared files mode enabled, IPC is disabled 00:05:34.036 EAL: No shared files mode enabled, IPC is disabled 00:05:34.036 EAL: No shared files mode enabled, IPC is disabled 00:05:34.036 00:05:34.036 real 0m8.239s 00:05:34.036 user 0m6.891s 00:05:34.036 sys 0m1.174s 00:05:34.036 ************************************ 00:05:34.036 END TEST env_vtophys 00:05:34.036 ************************************ 00:05:34.036 16:13:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.036 16:13:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:34.295 16:13:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.295 16:13:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.295 16:13:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.295 16:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.295 
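Reading the `vtophys_spdk_malloc_test` output above, the heap is expanded and shrunk in a geometric sweep: 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB, i.e. 2 + 2^k MB for k = 1..10. This is an observation about the log, not a statement about the test's source; a small sketch reproducing the sequence:

```shell
# Reproduce the allocation-size sweep observed in the vtophys log above:
# 2 + 2^k MB for k = 1..10 (an inference from the log, not from the test code).
malloc_sweep_sizes() {
    local k
    for (( k = 1; k <= 10; k++ )); do
        printf '%d ' $(( 2 + (1 << k) ))
    done
    printf '\n'
}
malloc_sweep_sizes
```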
************************************ 00:05:34.295 START TEST env_pci 00:05:34.295 ************************************ 00:05:34.295 16:13:27 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.295 00:05:34.295 00:05:34.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.295 http://cunit.sourceforge.net/ 00:05:34.295 00:05:34.295 00:05:34.295 Suite: pci 00:05:34.295 Test: pci_hook ...[2024-10-08 16:13:27.420704] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56775 has claimed it 00:05:34.295 passed 00:05:34.295 00:05:34.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.295 suites 1 1 n/a 0 0 00:05:34.295 tests 1 1 1 0 0 00:05:34.296 asserts 25 25 25 0 n/a 00:05:34.296 00:05:34.296 Elapsed time = 0.007 seconds 00:05:34.296 EAL: Cannot find device (10000:00:01.0) 00:05:34.296 EAL: Failed to attach device on primary process 00:05:34.296 ************************************ 00:05:34.296 END TEST env_pci 00:05:34.296 ************************************ 00:05:34.296 00:05:34.296 real 0m0.075s 00:05:34.296 user 0m0.029s 00:05:34.296 sys 0m0.046s 00:05:34.296 16:13:27 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.296 16:13:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:34.296 16:13:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.296 16:13:27 env -- env/env.sh@15 -- # uname 00:05:34.296 16:13:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.296 16:13:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:34.296 16:13:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.296 16:13:27 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:34.296 16:13:27 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.296 16:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.296 ************************************ 00:05:34.296 START TEST env_dpdk_post_init 00:05:34.296 ************************************ 00:05:34.296 16:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.296 EAL: Detected CPU lcores: 10 00:05:34.296 EAL: Detected NUMA nodes: 1 00:05:34.296 EAL: Detected shared linkage of DPDK 00:05:34.554 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.554 EAL: Selected IOVA mode 'PA' 00:05:34.554 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:34.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:34.554 Starting DPDK initialization... 00:05:34.554 Starting SPDK post initialization... 00:05:34.554 SPDK NVMe probe 00:05:34.554 Attaching to 0000:00:10.0 00:05:34.554 Attaching to 0000:00:11.0 00:05:34.554 Attached to 0000:00:10.0 00:05:34.554 Attached to 0000:00:11.0 00:05:34.554 Cleaning up... 
00:05:34.554 00:05:34.554 real 0m0.289s 00:05:34.554 user 0m0.083s 00:05:34.554 sys 0m0.106s 00:05:34.554 16:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.554 16:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.554 ************************************ 00:05:34.554 END TEST env_dpdk_post_init 00:05:34.554 ************************************ 00:05:34.554 16:13:27 env -- env/env.sh@26 -- # uname 00:05:34.554 16:13:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.554 16:13:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.554 16:13:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.554 16:13:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.554 16:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.554 ************************************ 00:05:34.554 START TEST env_mem_callbacks 00:05:34.554 ************************************ 00:05:34.554 16:13:27 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.812 EAL: Detected CPU lcores: 10 00:05:34.812 EAL: Detected NUMA nodes: 1 00:05:34.812 EAL: Detected shared linkage of DPDK 00:05:34.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.812 EAL: Selected IOVA mode 'PA' 00:05:34.812 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.812 00:05:34.812 00:05:34.812 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.812 http://cunit.sourceforge.net/ 00:05:34.812 00:05:34.812 00:05:34.812 Suite: memory 00:05:34.812 Test: test ... 
00:05:34.812 register 0x200000200000 2097152 00:05:34.812 malloc 3145728 00:05:34.812 register 0x200000400000 4194304 00:05:34.812 buf 0x2000004fffc0 len 3145728 PASSED 00:05:34.812 malloc 64 00:05:34.812 buf 0x2000004ffec0 len 64 PASSED 00:05:34.812 malloc 4194304 00:05:34.812 register 0x200000800000 6291456 00:05:34.812 buf 0x2000009fffc0 len 4194304 PASSED 00:05:34.812 free 0x2000004fffc0 3145728 00:05:34.812 free 0x2000004ffec0 64 00:05:34.812 unregister 0x200000400000 4194304 PASSED 00:05:34.812 free 0x2000009fffc0 4194304 00:05:34.812 unregister 0x200000800000 6291456 PASSED 00:05:34.812 malloc 8388608 00:05:34.813 register 0x200000400000 10485760 00:05:34.813 buf 0x2000005fffc0 len 8388608 PASSED 00:05:34.813 free 0x2000005fffc0 8388608 00:05:34.813 unregister 0x200000400000 10485760 PASSED 00:05:35.071 passed 00:05:35.071 00:05:35.071 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.071 suites 1 1 n/a 0 0 00:05:35.071 tests 1 1 1 0 0 00:05:35.071 asserts 15 15 15 0 n/a 00:05:35.071 00:05:35.071 Elapsed time = 0.059 seconds 00:05:35.071 00:05:35.071 real 0m0.291s 00:05:35.071 user 0m0.107s 00:05:35.071 sys 0m0.077s 00:05:35.071 16:13:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.071 16:13:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:35.071 ************************************ 00:05:35.071 END TEST env_mem_callbacks 00:05:35.071 ************************************ 00:05:35.071 ************************************ 00:05:35.071 END TEST env 00:05:35.071 ************************************ 00:05:35.071 00:05:35.071 real 0m9.763s 00:05:35.071 user 0m7.680s 00:05:35.071 sys 0m1.678s 00:05:35.071 16:13:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.071 16:13:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.071 16:13:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.071 16:13:28 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.071 16:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.071 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.071 ************************************ 00:05:35.071 START TEST rpc 00:05:35.071 ************************************ 00:05:35.071 16:13:28 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.071 * Looking for test storage... 00:05:35.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.071 16:13:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.071 16:13:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.071 16:13:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.329 16:13:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.329 16:13:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.329 16:13:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.329 16:13:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.329 16:13:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.329 16:13:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.329 16:13:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.329 16:13:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.329 16:13:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.329 16:13:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.329 16:13:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.329 16:13:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.329 16:13:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.329 16:13:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.329 16:13:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 16:13:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56902 00:05:35.329 16:13:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.329 16:13:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:35.329 16:13:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56902 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 56902 ']' 00:05:35.329 16:13:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.330 16:13:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.330 16:13:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.330 16:13:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.330 16:13:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.330 [2024-10-08 16:13:28.635170] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:35.330 [2024-10-08 16:13:28.635358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56902 ] 00:05:35.587 [2024-10-08 16:13:28.812581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.846 [2024-10-08 16:13:29.049670] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:35.846 [2024-10-08 16:13:29.049740] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56902' to capture a snapshot of events at runtime. 00:05:35.846 [2024-10-08 16:13:29.049757] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.846 [2024-10-08 16:13:29.049772] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.846 [2024-10-08 16:13:29.049784] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56902 for offline analysis/debug. 
00:05:35.846 [2024-10-08 16:13:29.051131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.799 16:13:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.799 16:13:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.799 16:13:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.799 16:13:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.799 16:13:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.799 16:13:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.799 16:13:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.799 16:13:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.799 16:13:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.799 ************************************ 00:05:36.799 START TEST rpc_integrity 00:05:36.799 ************************************ 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:36.799 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.799 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.799 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.799 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.799 16:13:30 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.799 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.071 { 00:05:37.071 "name": "Malloc0", 00:05:37.071 "aliases": [ 00:05:37.071 "22280eab-c184-448a-9e02-470f49db58be" 00:05:37.071 ], 00:05:37.071 "product_name": "Malloc disk", 00:05:37.071 "block_size": 512, 00:05:37.071 "num_blocks": 16384, 00:05:37.071 "uuid": "22280eab-c184-448a-9e02-470f49db58be", 00:05:37.071 "assigned_rate_limits": { 00:05:37.071 "rw_ios_per_sec": 0, 00:05:37.071 "rw_mbytes_per_sec": 0, 00:05:37.071 "r_mbytes_per_sec": 0, 00:05:37.071 "w_mbytes_per_sec": 0 00:05:37.071 }, 00:05:37.071 "claimed": false, 00:05:37.071 "zoned": false, 00:05:37.071 "supported_io_types": { 00:05:37.071 "read": true, 00:05:37.071 "write": true, 00:05:37.071 "unmap": true, 00:05:37.071 "flush": true, 00:05:37.071 "reset": true, 00:05:37.071 "nvme_admin": false, 00:05:37.071 "nvme_io": false, 00:05:37.071 "nvme_io_md": false, 00:05:37.071 "write_zeroes": true, 00:05:37.071 "zcopy": true, 00:05:37.071 "get_zone_info": false, 00:05:37.071 "zone_management": false, 00:05:37.071 "zone_append": false, 00:05:37.071 "compare": false, 00:05:37.071 "compare_and_write": false, 00:05:37.071 "abort": true, 00:05:37.071 "seek_hole": false, 
00:05:37.071 "seek_data": false, 00:05:37.071 "copy": true, 00:05:37.071 "nvme_iov_md": false 00:05:37.071 }, 00:05:37.071 "memory_domains": [ 00:05:37.071 { 00:05:37.071 "dma_device_id": "system", 00:05:37.071 "dma_device_type": 1 00:05:37.071 }, 00:05:37.071 { 00:05:37.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.071 "dma_device_type": 2 00:05:37.071 } 00:05:37.071 ], 00:05:37.071 "driver_specific": {} 00:05:37.071 } 00:05:37.071 ]' 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.071 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.071 [2024-10-08 16:13:30.206087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.071 [2024-10-08 16:13:30.206231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.071 [2024-10-08 16:13:30.206299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:37.071 [2024-10-08 16:13:30.206338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.071 [2024-10-08 16:13:30.209562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.071 [2024-10-08 16:13:30.209610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.071 Passthru0 00:05:37.071 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.072 { 00:05:37.072 "name": "Malloc0", 00:05:37.072 "aliases": [ 00:05:37.072 "22280eab-c184-448a-9e02-470f49db58be" 00:05:37.072 ], 00:05:37.072 "product_name": "Malloc disk", 00:05:37.072 "block_size": 512, 00:05:37.072 "num_blocks": 16384, 00:05:37.072 "uuid": "22280eab-c184-448a-9e02-470f49db58be", 00:05:37.072 "assigned_rate_limits": { 00:05:37.072 "rw_ios_per_sec": 0, 00:05:37.072 "rw_mbytes_per_sec": 0, 00:05:37.072 "r_mbytes_per_sec": 0, 00:05:37.072 "w_mbytes_per_sec": 0 00:05:37.072 }, 00:05:37.072 "claimed": true, 00:05:37.072 "claim_type": "exclusive_write", 00:05:37.072 "zoned": false, 00:05:37.072 "supported_io_types": { 00:05:37.072 "read": true, 00:05:37.072 "write": true, 00:05:37.072 "unmap": true, 00:05:37.072 "flush": true, 00:05:37.072 "reset": true, 00:05:37.072 "nvme_admin": false, 00:05:37.072 "nvme_io": false, 00:05:37.072 "nvme_io_md": false, 00:05:37.072 "write_zeroes": true, 00:05:37.072 "zcopy": true, 00:05:37.072 "get_zone_info": false, 00:05:37.072 "zone_management": false, 00:05:37.072 "zone_append": false, 00:05:37.072 "compare": false, 00:05:37.072 "compare_and_write": false, 00:05:37.072 "abort": true, 00:05:37.072 "seek_hole": false, 00:05:37.072 "seek_data": false, 00:05:37.072 "copy": true, 00:05:37.072 "nvme_iov_md": false 00:05:37.072 }, 00:05:37.072 "memory_domains": [ 00:05:37.072 { 00:05:37.072 "dma_device_id": "system", 00:05:37.072 "dma_device_type": 1 00:05:37.072 }, 00:05:37.072 { 00:05:37.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.072 "dma_device_type": 2 00:05:37.072 } 00:05:37.072 ], 00:05:37.072 "driver_specific": {} 00:05:37.072 }, 00:05:37.072 { 00:05:37.072 "name": "Passthru0", 00:05:37.072 "aliases": [ 00:05:37.072 "4f034767-ccc8-5f74-b598-bda9e62bea5c" 00:05:37.072 ], 00:05:37.072 "product_name": "passthru", 00:05:37.072 
"block_size": 512, 00:05:37.072 "num_blocks": 16384, 00:05:37.072 "uuid": "4f034767-ccc8-5f74-b598-bda9e62bea5c", 00:05:37.072 "assigned_rate_limits": { 00:05:37.072 "rw_ios_per_sec": 0, 00:05:37.072 "rw_mbytes_per_sec": 0, 00:05:37.072 "r_mbytes_per_sec": 0, 00:05:37.072 "w_mbytes_per_sec": 0 00:05:37.072 }, 00:05:37.072 "claimed": false, 00:05:37.072 "zoned": false, 00:05:37.072 "supported_io_types": { 00:05:37.072 "read": true, 00:05:37.072 "write": true, 00:05:37.072 "unmap": true, 00:05:37.072 "flush": true, 00:05:37.072 "reset": true, 00:05:37.072 "nvme_admin": false, 00:05:37.072 "nvme_io": false, 00:05:37.072 "nvme_io_md": false, 00:05:37.072 "write_zeroes": true, 00:05:37.072 "zcopy": true, 00:05:37.072 "get_zone_info": false, 00:05:37.072 "zone_management": false, 00:05:37.072 "zone_append": false, 00:05:37.072 "compare": false, 00:05:37.072 "compare_and_write": false, 00:05:37.072 "abort": true, 00:05:37.072 "seek_hole": false, 00:05:37.072 "seek_data": false, 00:05:37.072 "copy": true, 00:05:37.072 "nvme_iov_md": false 00:05:37.072 }, 00:05:37.072 "memory_domains": [ 00:05:37.072 { 00:05:37.072 "dma_device_id": "system", 00:05:37.072 "dma_device_type": 1 00:05:37.072 }, 00:05:37.072 { 00:05:37.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.072 "dma_device_type": 2 00:05:37.072 } 00:05:37.072 ], 00:05:37.072 "driver_specific": { 00:05:37.072 "passthru": { 00:05:37.072 "name": "Passthru0", 00:05:37.072 "base_bdev_name": "Malloc0" 00:05:37.072 } 00:05:37.072 } 00:05:37.072 } 00:05:37.072 ]' 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 16:13:30 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.072 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.330 ************************************ 00:05:37.330 END TEST rpc_integrity 00:05:37.330 ************************************ 00:05:37.330 16:13:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.330 00:05:37.330 real 0m0.373s 00:05:37.330 user 0m0.220s 00:05:37.330 sys 0m0.042s 00:05:37.330 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.330 16:13:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 16:13:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.330 16:13:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.330 16:13:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.330 16:13:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 ************************************ 00:05:37.330 START TEST rpc_plugins 00:05:37.330 ************************************ 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:37.330 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.330 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.330 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.330 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.330 { 00:05:37.331 "name": "Malloc1", 00:05:37.331 "aliases": [ 00:05:37.331 "5d4ea39f-5f1f-4e2f-8b13-808c2e9ce8e8" 00:05:37.331 ], 00:05:37.331 "product_name": "Malloc disk", 00:05:37.331 "block_size": 4096, 00:05:37.331 "num_blocks": 256, 00:05:37.331 "uuid": "5d4ea39f-5f1f-4e2f-8b13-808c2e9ce8e8", 00:05:37.331 "assigned_rate_limits": { 00:05:37.331 "rw_ios_per_sec": 0, 00:05:37.331 "rw_mbytes_per_sec": 0, 00:05:37.331 "r_mbytes_per_sec": 0, 00:05:37.331 "w_mbytes_per_sec": 0 00:05:37.331 }, 00:05:37.331 "claimed": false, 00:05:37.331 "zoned": false, 00:05:37.331 "supported_io_types": { 00:05:37.331 "read": true, 00:05:37.331 "write": true, 00:05:37.331 "unmap": true, 00:05:37.331 "flush": true, 00:05:37.331 "reset": true, 00:05:37.331 "nvme_admin": false, 00:05:37.331 "nvme_io": false, 00:05:37.331 "nvme_io_md": false, 00:05:37.331 "write_zeroes": true, 00:05:37.331 "zcopy": true, 00:05:37.331 "get_zone_info": false, 00:05:37.331 "zone_management": false, 00:05:37.331 "zone_append": false, 00:05:37.331 "compare": false, 00:05:37.331 "compare_and_write": false, 00:05:37.331 "abort": true, 00:05:37.331 "seek_hole": false, 00:05:37.331 "seek_data": false, 00:05:37.331 "copy": 
true, 00:05:37.331 "nvme_iov_md": false 00:05:37.331 }, 00:05:37.331 "memory_domains": [ 00:05:37.331 { 00:05:37.331 "dma_device_id": "system", 00:05:37.331 "dma_device_type": 1 00:05:37.331 }, 00:05:37.331 { 00:05:37.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.331 "dma_device_type": 2 00:05:37.331 } 00:05:37.331 ], 00:05:37.331 "driver_specific": {} 00:05:37.331 } 00:05:37.331 ]' 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.331 ************************************ 00:05:37.331 END TEST rpc_plugins 00:05:37.331 ************************************ 00:05:37.331 16:13:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.331 00:05:37.331 real 0m0.179s 00:05:37.331 user 0m0.115s 00:05:37.331 sys 0m0.025s 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.331 16:13:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.589 16:13:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.589 16:13:30 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.590 16:13:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.590 16:13:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.590 ************************************ 00:05:37.590 START TEST rpc_trace_cmd_test 00:05:37.590 ************************************ 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:37.590 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56902", 00:05:37.590 "tpoint_group_mask": "0x8", 00:05:37.590 "iscsi_conn": { 00:05:37.590 "mask": "0x2", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "scsi": { 00:05:37.590 "mask": "0x4", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "bdev": { 00:05:37.590 "mask": "0x8", 00:05:37.590 "tpoint_mask": "0xffffffffffffffff" 00:05:37.590 }, 00:05:37.590 "nvmf_rdma": { 00:05:37.590 "mask": "0x10", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "nvmf_tcp": { 00:05:37.590 "mask": "0x20", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "ftl": { 00:05:37.590 "mask": "0x40", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "blobfs": { 00:05:37.590 "mask": "0x80", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "dsa": { 00:05:37.590 "mask": "0x200", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "thread": { 00:05:37.590 "mask": "0x400", 00:05:37.590 
"tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "nvme_pcie": { 00:05:37.590 "mask": "0x800", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "iaa": { 00:05:37.590 "mask": "0x1000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "nvme_tcp": { 00:05:37.590 "mask": "0x2000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "bdev_nvme": { 00:05:37.590 "mask": "0x4000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "sock": { 00:05:37.590 "mask": "0x8000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "blob": { 00:05:37.590 "mask": "0x10000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "bdev_raid": { 00:05:37.590 "mask": "0x20000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 }, 00:05:37.590 "scheduler": { 00:05:37.590 "mask": "0x40000", 00:05:37.590 "tpoint_mask": "0x0" 00:05:37.590 } 00:05:37.590 }' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:37.590 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.848 ************************************ 00:05:37.848 END TEST rpc_trace_cmd_test 00:05:37.848 ************************************ 00:05:37.848 16:13:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:37.848 00:05:37.848 real 0m0.278s 00:05:37.848 user 
0m0.239s 00:05:37.848 sys 0m0.028s 00:05:37.848 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.848 16:13:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.848 16:13:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:37.848 16:13:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:37.849 16:13:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:37.849 16:13:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.849 16:13:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.849 16:13:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.849 ************************************ 00:05:37.849 START TEST rpc_daemon_integrity 00:05:37.849 ************************************ 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.849 { 00:05:37.849 "name": "Malloc2", 00:05:37.849 "aliases": [ 00:05:37.849 "7313ae7f-79c9-446a-bc14-575dac7b07c2" 00:05:37.849 ], 00:05:37.849 "product_name": "Malloc disk", 00:05:37.849 "block_size": 512, 00:05:37.849 "num_blocks": 16384, 00:05:37.849 "uuid": "7313ae7f-79c9-446a-bc14-575dac7b07c2", 00:05:37.849 "assigned_rate_limits": { 00:05:37.849 "rw_ios_per_sec": 0, 00:05:37.849 "rw_mbytes_per_sec": 0, 00:05:37.849 "r_mbytes_per_sec": 0, 00:05:37.849 "w_mbytes_per_sec": 0 00:05:37.849 }, 00:05:37.849 "claimed": false, 00:05:37.849 "zoned": false, 00:05:37.849 "supported_io_types": { 00:05:37.849 "read": true, 00:05:37.849 "write": true, 00:05:37.849 "unmap": true, 00:05:37.849 "flush": true, 00:05:37.849 "reset": true, 00:05:37.849 "nvme_admin": false, 00:05:37.849 "nvme_io": false, 00:05:37.849 "nvme_io_md": false, 00:05:37.849 "write_zeroes": true, 00:05:37.849 "zcopy": true, 00:05:37.849 "get_zone_info": false, 00:05:37.849 "zone_management": false, 00:05:37.849 "zone_append": false, 00:05:37.849 "compare": false, 00:05:37.849 "compare_and_write": false, 00:05:37.849 "abort": true, 00:05:37.849 "seek_hole": false, 00:05:37.849 "seek_data": false, 00:05:37.849 "copy": true, 00:05:37.849 "nvme_iov_md": false 00:05:37.849 }, 00:05:37.849 "memory_domains": [ 00:05:37.849 { 00:05:37.849 "dma_device_id": "system", 00:05:37.849 "dma_device_type": 1 00:05:37.849 }, 00:05:37.849 { 00:05:37.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.849 "dma_device_type": 2 00:05:37.849 } 
00:05:37.849 ], 00:05:37.849 "driver_specific": {} 00:05:37.849 } 00:05:37.849 ]' 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.849 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.108 [2024-10-08 16:13:31.171807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.108 [2024-10-08 16:13:31.171878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.108 [2024-10-08 16:13:31.171907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:38.108 [2024-10-08 16:13:31.171925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.108 [2024-10-08 16:13:31.174931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.108 [2024-10-08 16:13:31.175115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.108 Passthru0 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.108 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.108 { 00:05:38.108 "name": "Malloc2", 00:05:38.108 "aliases": [ 00:05:38.108 "7313ae7f-79c9-446a-bc14-575dac7b07c2" 
00:05:38.108 ], 00:05:38.108 "product_name": "Malloc disk", 00:05:38.108 "block_size": 512, 00:05:38.108 "num_blocks": 16384, 00:05:38.108 "uuid": "7313ae7f-79c9-446a-bc14-575dac7b07c2", 00:05:38.108 "assigned_rate_limits": { 00:05:38.108 "rw_ios_per_sec": 0, 00:05:38.108 "rw_mbytes_per_sec": 0, 00:05:38.108 "r_mbytes_per_sec": 0, 00:05:38.108 "w_mbytes_per_sec": 0 00:05:38.108 }, 00:05:38.108 "claimed": true, 00:05:38.108 "claim_type": "exclusive_write", 00:05:38.108 "zoned": false, 00:05:38.108 "supported_io_types": { 00:05:38.108 "read": true, 00:05:38.108 "write": true, 00:05:38.108 "unmap": true, 00:05:38.108 "flush": true, 00:05:38.108 "reset": true, 00:05:38.108 "nvme_admin": false, 00:05:38.108 "nvme_io": false, 00:05:38.108 "nvme_io_md": false, 00:05:38.108 "write_zeroes": true, 00:05:38.108 "zcopy": true, 00:05:38.108 "get_zone_info": false, 00:05:38.108 "zone_management": false, 00:05:38.108 "zone_append": false, 00:05:38.108 "compare": false, 00:05:38.108 "compare_and_write": false, 00:05:38.108 "abort": true, 00:05:38.108 "seek_hole": false, 00:05:38.108 "seek_data": false, 00:05:38.108 "copy": true, 00:05:38.108 "nvme_iov_md": false 00:05:38.108 }, 00:05:38.108 "memory_domains": [ 00:05:38.108 { 00:05:38.108 "dma_device_id": "system", 00:05:38.108 "dma_device_type": 1 00:05:38.108 }, 00:05:38.108 { 00:05:38.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.108 "dma_device_type": 2 00:05:38.108 } 00:05:38.108 ], 00:05:38.108 "driver_specific": {} 00:05:38.108 }, 00:05:38.108 { 00:05:38.108 "name": "Passthru0", 00:05:38.108 "aliases": [ 00:05:38.108 "182f9b71-ae49-5510-9d43-659f0619a68f" 00:05:38.108 ], 00:05:38.108 "product_name": "passthru", 00:05:38.108 "block_size": 512, 00:05:38.108 "num_blocks": 16384, 00:05:38.108 "uuid": "182f9b71-ae49-5510-9d43-659f0619a68f", 00:05:38.108 "assigned_rate_limits": { 00:05:38.108 "rw_ios_per_sec": 0, 00:05:38.108 "rw_mbytes_per_sec": 0, 00:05:38.108 "r_mbytes_per_sec": 0, 00:05:38.108 "w_mbytes_per_sec": 0 
00:05:38.108 }, 00:05:38.108 "claimed": false, 00:05:38.108 "zoned": false, 00:05:38.108 "supported_io_types": { 00:05:38.108 "read": true, 00:05:38.108 "write": true, 00:05:38.108 "unmap": true, 00:05:38.108 "flush": true, 00:05:38.108 "reset": true, 00:05:38.108 "nvme_admin": false, 00:05:38.108 "nvme_io": false, 00:05:38.108 "nvme_io_md": false, 00:05:38.108 "write_zeroes": true, 00:05:38.108 "zcopy": true, 00:05:38.108 "get_zone_info": false, 00:05:38.108 "zone_management": false, 00:05:38.108 "zone_append": false, 00:05:38.108 "compare": false, 00:05:38.108 "compare_and_write": false, 00:05:38.108 "abort": true, 00:05:38.108 "seek_hole": false, 00:05:38.108 "seek_data": false, 00:05:38.108 "copy": true, 00:05:38.108 "nvme_iov_md": false 00:05:38.108 }, 00:05:38.108 "memory_domains": [ 00:05:38.108 { 00:05:38.108 "dma_device_id": "system", 00:05:38.108 "dma_device_type": 1 00:05:38.108 }, 00:05:38.108 { 00:05:38.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.108 "dma_device_type": 2 00:05:38.108 } 00:05:38.108 ], 00:05:38.108 "driver_specific": { 00:05:38.108 "passthru": { 00:05:38.108 "name": "Passthru0", 00:05:38.109 "base_bdev_name": "Malloc2" 00:05:38.109 } 00:05:38.109 } 00:05:38.109 } 00:05:38.109 ]' 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.109 ************************************ 00:05:38.109 END TEST rpc_daemon_integrity 00:05:38.109 ************************************ 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.109 00:05:38.109 real 0m0.355s 00:05:38.109 user 0m0.218s 00:05:38.109 sys 0m0.040s 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.109 16:13:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.109 16:13:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.109 16:13:31 rpc -- rpc/rpc.sh@84 -- # killprocess 56902 00:05:38.109 16:13:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 56902 ']' 00:05:38.109 16:13:31 rpc -- common/autotest_common.sh@954 -- # kill -0 56902 00:05:38.109 16:13:31 rpc -- common/autotest_common.sh@955 -- # uname 00:05:38.109 16:13:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.109 16:13:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56902 00:05:38.367 killing process with pid 56902 00:05:38.367 16:13:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.367 16:13:31 rpc -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:05:38.367 16:13:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56902' 00:05:38.367 16:13:31 rpc -- common/autotest_common.sh@969 -- # kill 56902 00:05:38.367 16:13:31 rpc -- common/autotest_common.sh@974 -- # wait 56902 00:05:40.898 ************************************ 00:05:40.898 END TEST rpc 00:05:40.898 ************************************ 00:05:40.898 00:05:40.898 real 0m5.555s 00:05:40.898 user 0m6.193s 00:05:40.898 sys 0m0.970s 00:05:40.898 16:13:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.898 16:13:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.898 16:13:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.899 16:13:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.899 16:13:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.899 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:05:40.899 ************************************ 00:05:40.899 START TEST skip_rpc 00:05:40.899 ************************************ 00:05:40.899 16:13:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.899 * Looking for test storage... 
00:05:40.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.899 16:13:33 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.899 16:13:33 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.899 16:13:33 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.899 16:13:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.899 --rc genhtml_branch_coverage=1 00:05:40.899 --rc genhtml_function_coverage=1 00:05:40.899 --rc genhtml_legend=1 00:05:40.899 --rc geninfo_all_blocks=1 00:05:40.899 --rc geninfo_unexecuted_blocks=1 00:05:40.899 00:05:40.899 ' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.899 --rc genhtml_branch_coverage=1 00:05:40.899 --rc genhtml_function_coverage=1 00:05:40.899 --rc genhtml_legend=1 00:05:40.899 --rc geninfo_all_blocks=1 00:05:40.899 --rc geninfo_unexecuted_blocks=1 00:05:40.899 00:05:40.899 ' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:40.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.899 --rc genhtml_branch_coverage=1 00:05:40.899 --rc genhtml_function_coverage=1 00:05:40.899 --rc genhtml_legend=1 00:05:40.899 --rc geninfo_all_blocks=1 00:05:40.899 --rc geninfo_unexecuted_blocks=1 00:05:40.899 00:05:40.899 ' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.899 --rc genhtml_branch_coverage=1 00:05:40.899 --rc genhtml_function_coverage=1 00:05:40.899 --rc genhtml_legend=1 00:05:40.899 --rc geninfo_all_blocks=1 00:05:40.899 --rc geninfo_unexecuted_blocks=1 00:05:40.899 00:05:40.899 ' 00:05:40.899 16:13:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.899 16:13:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.899 16:13:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.899 16:13:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.899 ************************************ 00:05:40.899 START TEST skip_rpc 00:05:40.899 ************************************ 00:05:40.899 16:13:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:40.899 16:13:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57137 00:05:40.899 16:13:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.899 16:13:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.899 16:13:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.899 [2024-10-08 16:13:34.193906] Starting SPDK v25.01-pre 
git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:40.899 [2024-10-08 16:13:34.194332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:05:41.157 [2024-10-08 16:13:34.377392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.415 [2024-10-08 16:13:34.672599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57137 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57137 ']' 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57137 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57137 00:05:46.677 killing process with pid 57137 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57137' 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57137 00:05:46.677 16:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57137 00:05:48.589 00:05:48.589 real 0m7.740s 00:05:48.589 user 0m7.027s 00:05:48.589 sys 0m0.610s 00:05:48.589 ************************************ 00:05:48.589 END TEST skip_rpc 00:05:48.589 ************************************ 00:05:48.589 16:13:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.589 16:13:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.589 16:13:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.589 16:13:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.589 16:13:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.589 16:13:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.589 
************************************ 00:05:48.589 START TEST skip_rpc_with_json 00:05:48.589 ************************************ 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57252 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57252 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57252 ']' 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.589 16:13:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.848 [2024-10-08 16:13:41.997169] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:48.848 [2024-10-08 16:13:41.997388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57252 ] 00:05:49.107 [2024-10-08 16:13:42.179671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.365 [2024-10-08 16:13:42.493894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.300 [2024-10-08 16:13:43.458788] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.300 request: 00:05:50.300 { 00:05:50.300 "trtype": "tcp", 00:05:50.300 "method": "nvmf_get_transports", 00:05:50.300 "req_id": 1 00:05:50.300 } 00:05:50.300 Got JSON-RPC error response 00:05:50.300 response: 00:05:50.300 { 00:05:50.300 "code": -19, 00:05:50.300 "message": "No such device" 00:05:50.300 } 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.300 [2024-10-08 16:13:43.470890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.300 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.560 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.560 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.560 { 00:05:50.560 "subsystems": [ 00:05:50.560 { 00:05:50.560 "subsystem": "fsdev", 00:05:50.560 "config": [ 00:05:50.560 { 00:05:50.560 "method": "fsdev_set_opts", 00:05:50.560 "params": { 00:05:50.560 "fsdev_io_pool_size": 65535, 00:05:50.560 "fsdev_io_cache_size": 256 00:05:50.560 } 00:05:50.560 } 00:05:50.560 ] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "keyring", 00:05:50.560 "config": [] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "iobuf", 00:05:50.560 "config": [ 00:05:50.560 { 00:05:50.560 "method": "iobuf_set_options", 00:05:50.560 "params": { 00:05:50.560 "small_pool_count": 8192, 00:05:50.560 "large_pool_count": 1024, 00:05:50.560 "small_bufsize": 8192, 00:05:50.560 "large_bufsize": 135168 00:05:50.560 } 00:05:50.560 } 00:05:50.560 ] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "sock", 00:05:50.560 "config": [ 00:05:50.560 { 00:05:50.560 "method": "sock_set_default_impl", 00:05:50.560 "params": { 00:05:50.560 "impl_name": "posix" 00:05:50.560 } 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "method": "sock_impl_set_options", 00:05:50.560 "params": { 00:05:50.560 "impl_name": "ssl", 00:05:50.560 "recv_buf_size": 4096, 00:05:50.560 "send_buf_size": 4096, 00:05:50.560 "enable_recv_pipe": true, 00:05:50.560 "enable_quickack": false, 00:05:50.560 "enable_placement_id": 0, 00:05:50.560 
"enable_zerocopy_send_server": true, 00:05:50.560 "enable_zerocopy_send_client": false, 00:05:50.560 "zerocopy_threshold": 0, 00:05:50.560 "tls_version": 0, 00:05:50.560 "enable_ktls": false 00:05:50.560 } 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "method": "sock_impl_set_options", 00:05:50.560 "params": { 00:05:50.560 "impl_name": "posix", 00:05:50.560 "recv_buf_size": 2097152, 00:05:50.560 "send_buf_size": 2097152, 00:05:50.560 "enable_recv_pipe": true, 00:05:50.560 "enable_quickack": false, 00:05:50.560 "enable_placement_id": 0, 00:05:50.560 "enable_zerocopy_send_server": true, 00:05:50.560 "enable_zerocopy_send_client": false, 00:05:50.560 "zerocopy_threshold": 0, 00:05:50.560 "tls_version": 0, 00:05:50.560 "enable_ktls": false 00:05:50.560 } 00:05:50.560 } 00:05:50.560 ] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "vmd", 00:05:50.560 "config": [] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "accel", 00:05:50.560 "config": [ 00:05:50.560 { 00:05:50.560 "method": "accel_set_options", 00:05:50.560 "params": { 00:05:50.560 "small_cache_size": 128, 00:05:50.560 "large_cache_size": 16, 00:05:50.560 "task_count": 2048, 00:05:50.560 "sequence_count": 2048, 00:05:50.560 "buf_count": 2048 00:05:50.560 } 00:05:50.560 } 00:05:50.560 ] 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "subsystem": "bdev", 00:05:50.560 "config": [ 00:05:50.560 { 00:05:50.560 "method": "bdev_set_options", 00:05:50.560 "params": { 00:05:50.560 "bdev_io_pool_size": 65535, 00:05:50.560 "bdev_io_cache_size": 256, 00:05:50.560 "bdev_auto_examine": true, 00:05:50.560 "iobuf_small_cache_size": 128, 00:05:50.560 "iobuf_large_cache_size": 16 00:05:50.560 } 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "method": "bdev_raid_set_options", 00:05:50.560 "params": { 00:05:50.560 "process_window_size_kb": 1024, 00:05:50.560 "process_max_bandwidth_mb_sec": 0 00:05:50.560 } 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "method": "bdev_iscsi_set_options", 00:05:50.560 "params": { 00:05:50.560 
"timeout_sec": 30 00:05:50.560 } 00:05:50.560 }, 00:05:50.560 { 00:05:50.560 "method": "bdev_nvme_set_options", 00:05:50.560 "params": { 00:05:50.560 "action_on_timeout": "none", 00:05:50.560 "timeout_us": 0, 00:05:50.560 "timeout_admin_us": 0, 00:05:50.560 "keep_alive_timeout_ms": 10000, 00:05:50.560 "arbitration_burst": 0, 00:05:50.560 "low_priority_weight": 0, 00:05:50.560 "medium_priority_weight": 0, 00:05:50.560 "high_priority_weight": 0, 00:05:50.560 "nvme_adminq_poll_period_us": 10000, 00:05:50.560 "nvme_ioq_poll_period_us": 0, 00:05:50.560 "io_queue_requests": 0, 00:05:50.560 "delay_cmd_submit": true, 00:05:50.560 "transport_retry_count": 4, 00:05:50.560 "bdev_retry_count": 3, 00:05:50.560 "transport_ack_timeout": 0, 00:05:50.560 "ctrlr_loss_timeout_sec": 0, 00:05:50.560 "reconnect_delay_sec": 0, 00:05:50.560 "fast_io_fail_timeout_sec": 0, 00:05:50.560 "disable_auto_failback": false, 00:05:50.560 "generate_uuids": false, 00:05:50.560 "transport_tos": 0, 00:05:50.560 "nvme_error_stat": false, 00:05:50.560 "rdma_srq_size": 0, 00:05:50.560 "io_path_stat": false, 00:05:50.560 "allow_accel_sequence": false, 00:05:50.560 "rdma_max_cq_size": 0, 00:05:50.560 "rdma_cm_event_timeout_ms": 0, 00:05:50.560 "dhchap_digests": [ 00:05:50.560 "sha256", 00:05:50.560 "sha384", 00:05:50.560 "sha512" 00:05:50.560 ], 00:05:50.560 "dhchap_dhgroups": [ 00:05:50.560 "null", 00:05:50.560 "ffdhe2048", 00:05:50.560 "ffdhe3072", 00:05:50.560 "ffdhe4096", 00:05:50.560 "ffdhe6144", 00:05:50.560 "ffdhe8192" 00:05:50.561 ] 00:05:50.561 } 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "method": "bdev_nvme_set_hotplug", 00:05:50.561 "params": { 00:05:50.561 "period_us": 100000, 00:05:50.561 "enable": false 00:05:50.561 } 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "method": "bdev_wait_for_examine" 00:05:50.561 } 00:05:50.561 ] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "scsi", 00:05:50.561 "config": null 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "scheduler", 
00:05:50.561 "config": [ 00:05:50.561 { 00:05:50.561 "method": "framework_set_scheduler", 00:05:50.561 "params": { 00:05:50.561 "name": "static" 00:05:50.561 } 00:05:50.561 } 00:05:50.561 ] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "vhost_scsi", 00:05:50.561 "config": [] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "vhost_blk", 00:05:50.561 "config": [] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "ublk", 00:05:50.561 "config": [] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "nbd", 00:05:50.561 "config": [] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "nvmf", 00:05:50.561 "config": [ 00:05:50.561 { 00:05:50.561 "method": "nvmf_set_config", 00:05:50.561 "params": { 00:05:50.561 "discovery_filter": "match_any", 00:05:50.561 "admin_cmd_passthru": { 00:05:50.561 "identify_ctrlr": false 00:05:50.561 }, 00:05:50.561 "dhchap_digests": [ 00:05:50.561 "sha256", 00:05:50.561 "sha384", 00:05:50.561 "sha512" 00:05:50.561 ], 00:05:50.561 "dhchap_dhgroups": [ 00:05:50.561 "null", 00:05:50.561 "ffdhe2048", 00:05:50.561 "ffdhe3072", 00:05:50.561 "ffdhe4096", 00:05:50.561 "ffdhe6144", 00:05:50.561 "ffdhe8192" 00:05:50.561 ] 00:05:50.561 } 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "method": "nvmf_set_max_subsystems", 00:05:50.561 "params": { 00:05:50.561 "max_subsystems": 1024 00:05:50.561 } 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "method": "nvmf_set_crdt", 00:05:50.561 "params": { 00:05:50.561 "crdt1": 0, 00:05:50.561 "crdt2": 0, 00:05:50.561 "crdt3": 0 00:05:50.561 } 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "method": "nvmf_create_transport", 00:05:50.561 "params": { 00:05:50.561 "trtype": "TCP", 00:05:50.561 "max_queue_depth": 128, 00:05:50.561 "max_io_qpairs_per_ctrlr": 127, 00:05:50.561 "in_capsule_data_size": 4096, 00:05:50.561 "max_io_size": 131072, 00:05:50.561 "io_unit_size": 131072, 00:05:50.561 "max_aq_depth": 128, 00:05:50.561 "num_shared_buffers": 511, 00:05:50.561 "buf_cache_size": 4294967295, 
00:05:50.561 "dif_insert_or_strip": false, 00:05:50.561 "zcopy": false, 00:05:50.561 "c2h_success": true, 00:05:50.561 "sock_priority": 0, 00:05:50.561 "abort_timeout_sec": 1, 00:05:50.561 "ack_timeout": 0, 00:05:50.561 "data_wr_pool_size": 0 00:05:50.561 } 00:05:50.561 } 00:05:50.561 ] 00:05:50.561 }, 00:05:50.561 { 00:05:50.561 "subsystem": "iscsi", 00:05:50.561 "config": [ 00:05:50.561 { 00:05:50.561 "method": "iscsi_set_options", 00:05:50.561 "params": { 00:05:50.561 "node_base": "iqn.2016-06.io.spdk", 00:05:50.561 "max_sessions": 128, 00:05:50.561 "max_connections_per_session": 2, 00:05:50.561 "max_queue_depth": 64, 00:05:50.561 "default_time2wait": 2, 00:05:50.561 "default_time2retain": 20, 00:05:50.561 "first_burst_length": 8192, 00:05:50.561 "immediate_data": true, 00:05:50.561 "allow_duplicated_isid": false, 00:05:50.561 "error_recovery_level": 0, 00:05:50.561 "nop_timeout": 60, 00:05:50.561 "nop_in_interval": 30, 00:05:50.561 "disable_chap": false, 00:05:50.561 "require_chap": false, 00:05:50.561 "mutual_chap": false, 00:05:50.561 "chap_group": 0, 00:05:50.561 "max_large_datain_per_connection": 64, 00:05:50.561 "max_r2t_per_connection": 4, 00:05:50.561 "pdu_pool_size": 36864, 00:05:50.561 "immediate_data_pool_size": 16384, 00:05:50.561 "data_out_pool_size": 2048 00:05:50.561 } 00:05:50.561 } 00:05:50.561 ] 00:05:50.561 } 00:05:50.561 ] 00:05:50.561 } 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57252 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57252 ']' 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57252 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57252 00:05:50.561 killing process with pid 57252 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57252' 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57252 00:05:50.561 16:13:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57252 00:05:53.096 16:13:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57308 00:05:53.096 16:13:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.096 16:13:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57308 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57308 ']' 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57308 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57308 00:05:58.378 killing process with pid 57308 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57308' 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57308 00:05:58.378 16:13:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57308 00:06:00.908 16:13:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.908 16:13:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.908 ************************************ 00:06:00.908 END TEST skip_rpc_with_json 00:06:00.908 ************************************ 00:06:00.908 00:06:00.908 real 0m12.018s 00:06:00.908 user 0m11.217s 00:06:00.908 sys 0m1.253s 00:06:00.908 16:13:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.908 16:13:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.908 16:13:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.908 16:13:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.908 16:13:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.908 16:13:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.908 ************************************ 00:06:00.909 START TEST skip_rpc_with_delay 00:06:00.909 ************************************ 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.909 16:13:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.909 [2024-10-08 16:13:54.073239] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:00.909 [2024-10-08 16:13:54.073477] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.909 00:06:00.909 real 0m0.213s 00:06:00.909 user 0m0.112s 00:06:00.909 sys 0m0.097s 00:06:00.909 ************************************ 00:06:00.909 END TEST skip_rpc_with_delay 00:06:00.909 ************************************ 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.909 16:13:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.909 16:13:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.909 16:13:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.909 16:13:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.909 16:13:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.909 16:13:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.909 16:13:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.909 ************************************ 00:06:00.909 START TEST exit_on_failed_rpc_init 00:06:00.909 ************************************ 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57447 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57447 00:06:00.909 16:13:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57447 ']' 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.909 16:13:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.167 [2024-10-08 16:13:54.327898] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:06:01.167 [2024-10-08 16:13:54.328091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57447 ] 00:06:01.424 [2024-10-08 16:13:54.504745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.682 [2024-10-08 16:13:54.766209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.630 16:13:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.630 16:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.630 [2024-10-08 16:13:55.768878] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:02.630 [2024-10-08 16:13:55.769101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57465 ] 00:06:02.630 [2024-10-08 16:13:55.950814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.197 [2024-10-08 16:13:56.257004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.197 [2024-10-08 16:13:56.257160] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:03.197 [2024-10-08 16:13:56.257185] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.197 [2024-10-08 16:13:56.257204] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57447 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57447 ']' 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57447 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.455 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57447 00:06:03.714 killing process with pid 57447 00:06:03.714 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.714 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.714 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57447' 00:06:03.714 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57447 00:06:03.714 16:13:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57447 00:06:06.246 ************************************ 00:06:06.246 END TEST exit_on_failed_rpc_init 00:06:06.247 ************************************ 00:06:06.247 00:06:06.247 real 0m4.954s 00:06:06.247 user 0m5.758s 00:06:06.247 sys 0m0.737s 00:06:06.247 16:13:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.247 16:13:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.247 16:13:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.247 00:06:06.247 real 0m25.331s 00:06:06.247 user 0m24.289s 00:06:06.247 sys 0m2.913s 00:06:06.247 ************************************ 00:06:06.247 END TEST skip_rpc 00:06:06.247 ************************************ 00:06:06.247 16:13:59 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.247 16:13:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.247 16:13:59 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.247 16:13:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.247 16:13:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.247 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:06:06.247 ************************************ 00:06:06.247 START TEST rpc_client 00:06:06.247 ************************************ 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.247 * Looking for test storage... 
00:06:06.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.247 16:13:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.247 --rc genhtml_branch_coverage=1 00:06:06.247 --rc genhtml_function_coverage=1 00:06:06.247 --rc genhtml_legend=1 00:06:06.247 --rc geninfo_all_blocks=1 00:06:06.247 --rc geninfo_unexecuted_blocks=1 00:06:06.247 00:06:06.247 ' 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.247 --rc genhtml_branch_coverage=1 00:06:06.247 --rc genhtml_function_coverage=1 00:06:06.247 --rc genhtml_legend=1 00:06:06.247 --rc geninfo_all_blocks=1 00:06:06.247 --rc geninfo_unexecuted_blocks=1 00:06:06.247 00:06:06.247 ' 00:06:06.247 16:13:59 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.247 --rc genhtml_branch_coverage=1 00:06:06.247 --rc genhtml_function_coverage=1 00:06:06.247 --rc genhtml_legend=1 00:06:06.247 --rc geninfo_all_blocks=1 00:06:06.247 --rc geninfo_unexecuted_blocks=1 00:06:06.247 00:06:06.247 ' 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.247 --rc genhtml_branch_coverage=1 00:06:06.247 --rc genhtml_function_coverage=1 00:06:06.247 --rc genhtml_legend=1 00:06:06.247 --rc geninfo_all_blocks=1 00:06:06.247 --rc geninfo_unexecuted_blocks=1 00:06:06.247 00:06:06.247 ' 00:06:06.247 16:13:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.247 OK 00:06:06.247 16:13:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.247 ************************************ 00:06:06.247 END TEST rpc_client 00:06:06.247 ************************************ 00:06:06.247 00:06:06.247 real 0m0.271s 00:06:06.247 user 0m0.163s 00:06:06.247 sys 0m0.120s 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.247 16:13:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.247 16:13:59 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.247 16:13:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.247 16:13:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.247 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:06:06.506 ************************************ 00:06:06.506 START TEST json_config 00:06:06.506 ************************************ 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.506 16:13:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.506 16:13:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.506 16:13:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.506 16:13:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.506 16:13:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.506 16:13:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:06.506 16:13:59 json_config -- scripts/common.sh@345 -- # : 1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.506 16:13:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.506 16:13:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@353 -- # local d=1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.506 16:13:59 json_config -- scripts/common.sh@355 -- # echo 1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.506 16:13:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@353 -- # local d=2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.506 16:13:59 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.506 16:13:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.506 16:13:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.506 16:13:59 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.506 --rc genhtml_branch_coverage=1 00:06:06.506 --rc genhtml_function_coverage=1 00:06:06.506 --rc genhtml_legend=1 00:06:06.506 --rc geninfo_all_blocks=1 00:06:06.506 --rc geninfo_unexecuted_blocks=1 00:06:06.506 00:06:06.506 ' 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.506 --rc genhtml_branch_coverage=1 00:06:06.506 --rc genhtml_function_coverage=1 00:06:06.506 --rc genhtml_legend=1 00:06:06.506 --rc geninfo_all_blocks=1 00:06:06.506 --rc geninfo_unexecuted_blocks=1 00:06:06.506 00:06:06.506 ' 00:06:06.506 16:13:59 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.506 --rc genhtml_branch_coverage=1 00:06:06.506 --rc genhtml_function_coverage=1 00:06:06.506 --rc genhtml_legend=1 00:06:06.506 --rc geninfo_all_blocks=1 00:06:06.506 --rc geninfo_unexecuted_blocks=1 00:06:06.506 00:06:06.506 ' 00:06:06.506 16:13:59 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.506 --rc genhtml_branch_coverage=1 00:06:06.506 --rc genhtml_function_coverage=1 00:06:06.506 --rc genhtml_legend=1 00:06:06.506 --rc geninfo_all_blocks=1 00:06:06.506 --rc geninfo_unexecuted_blocks=1 00:06:06.506 00:06:06.506 ' 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.507 16:13:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.507 16:13:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.507 16:13:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.507 16:13:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.507 16:13:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.507 16:13:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.507 16:13:59 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.507 16:13:59 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.507 16:13:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.507 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.507 16:13:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.507 WARNING: No tests are enabled so not running JSON configuration tests 00:06:06.507 16:13:59 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:06.507 16:13:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:06.507 00:06:06.507 real 0m0.191s 00:06:06.507 user 0m0.127s 00:06:06.507 sys 0m0.067s 00:06:06.507 16:13:59 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.507 16:13:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.507 ************************************ 00:06:06.507 END TEST json_config 00:06:06.507 ************************************ 00:06:06.507 16:13:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.507 16:13:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.507 16:13:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.507 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:06:06.507 ************************************ 00:06:06.507 START TEST json_config_extra_key 00:06:06.507 ************************************ 00:06:06.507 16:13:59 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.767 16:13:59 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.767 16:13:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.767 --rc genhtml_branch_coverage=1 00:06:06.767 --rc genhtml_function_coverage=1 00:06:06.767 --rc genhtml_legend=1 00:06:06.767 --rc geninfo_all_blocks=1 00:06:06.767 --rc geninfo_unexecuted_blocks=1 00:06:06.767 00:06:06.767 ' 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.767 --rc genhtml_branch_coverage=1 00:06:06.767 --rc genhtml_function_coverage=1 00:06:06.767 --rc 
genhtml_legend=1 00:06:06.767 --rc geninfo_all_blocks=1 00:06:06.767 --rc geninfo_unexecuted_blocks=1 00:06:06.767 00:06:06.767 ' 00:06:06.767 16:13:59 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.767 --rc genhtml_branch_coverage=1 00:06:06.767 --rc genhtml_function_coverage=1 00:06:06.767 --rc genhtml_legend=1 00:06:06.767 --rc geninfo_all_blocks=1 00:06:06.767 --rc geninfo_unexecuted_blocks=1 00:06:06.767 00:06:06.767 ' 00:06:06.768 16:13:59 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.768 --rc genhtml_branch_coverage=1 00:06:06.768 --rc genhtml_function_coverage=1 00:06:06.768 --rc genhtml_legend=1 00:06:06.768 --rc geninfo_all_blocks=1 00:06:06.768 --rc geninfo_unexecuted_blocks=1 00:06:06.768 00:06:06.768 ' 00:06:06.768 16:13:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f1dd54b7-14e1-4b3b-9dae-e96f98659366 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.768 16:13:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.768 16:13:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.768 16:14:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.768 16:14:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.768 16:14:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.768 16:14:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.768 16:14:00 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.768 16:14:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.768 16:14:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.768 16:14:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.768 16:14:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.768 INFO: launching applications... 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
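The trace repeatedly evaluates `lt 1.15 2` through `cmp_versions` in scripts/common.sh, splitting both version strings on `.` and comparing component by component. A minimal standalone sketch of that comparison follows; the function name and padding behavior are illustrative, not the exact scripts/common.sh implementation.

```shell
#!/usr/bin/env bash
# Component-wise "less than" version comparison, as performed by the
# cmp_versions trace above. Illustrative re-sketch, not the SPDK original.
version_lt() {
    local IFS=.
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # Compare up to the longer component list, padding short ones with 0.
    local n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    local i
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly less at this component
        (( a > b )) && return 1   # strictly greater
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

Note that the comparison is numeric per component, which is why `1.9 < 1.15` holds here (9 < 15), unlike a plain string comparison.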
00:06:06.768 16:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57675 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.768 Waiting for target to run... 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.768 16:14:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57675 /var/tmp/spdk_tgt.sock 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57675 ']' 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:06.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.768 16:14:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 [2024-10-08 16:14:00.140432] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:07.027 [2024-10-08 16:14:00.140944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57675 ] 00:06:07.285 [2024-10-08 16:14:00.598688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.544 [2024-10-08 16:14:00.835146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.478 00:06:08.478 INFO: shutting down applications... 00:06:08.478 16:14:01 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.478 16:14:01 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.478 16:14:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
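The startup sequence above launches spdk_tgt and then blocks in `waitforlisten 57675 /var/tmp/spdk_tgt.sock` ("Waiting for process to start up and listen on UNIX domain socket...") until the target's RPC socket is up. A hedged sketch of that polling pattern, checking only for the socket file (the real helper also verifies RPC readiness; the function name, retry count, and interval here are assumptions):

```shell
#!/usr/bin/env bash
# Poll until a process is listening on the given UNIX domain socket.
# Simplified sketch of the "waitforlisten" pattern seen in the trace.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $sock ]] && return 0   # socket file exists: target is up
        sleep 0.1
    done
    return 1                         # gave up waiting
}
```

With the trace's default of 100 retries at 0.1 s each, the wait gives the target roughly ten seconds to come up before the test fails.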
00:06:08.478 16:14:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57675 ]] 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57675 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:08.478 16:14:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.735 16:14:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.735 16:14:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.735 16:14:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:08.735 16:14:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.302 16:14:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.302 16:14:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.302 16:14:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:09.302 16:14:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.868 16:14:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.868 16:14:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.868 16:14:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:09.868 16:14:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.435 16:14:03 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:10.435 16:14:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.435 16:14:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:10.435 16:14:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.000 16:14:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.000 16:14:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.000 16:14:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:11.000 16:14:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57675 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.258 SPDK target shutdown done 00:06:11.258 Success 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.258 16:14:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.258 16:14:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.258 ************************************ 00:06:11.258 END TEST json_config_extra_key 00:06:11.258 ************************************ 00:06:11.258 00:06:11.258 real 0m4.721s 00:06:11.258 user 0m4.236s 00:06:11.258 sys 0m0.644s 00:06:11.258 16:14:04 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.258 16:14:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 16:14:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.516 16:14:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.516 16:14:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.516 16:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 START TEST alias_rpc 00:06:11.516 ************************************ 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.516 * Looking for test storage... 00:06:11.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:11.516 16:14:04 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.516 16:14:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.516 --rc genhtml_branch_coverage=1 00:06:11.516 --rc genhtml_function_coverage=1 00:06:11.516 --rc genhtml_legend=1 00:06:11.516 --rc geninfo_all_blocks=1 00:06:11.516 --rc geninfo_unexecuted_blocks=1 00:06:11.516 00:06:11.516 ' 00:06:11.516 16:14:04 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.517 --rc genhtml_branch_coverage=1 00:06:11.517 --rc genhtml_function_coverage=1 00:06:11.517 --rc 
genhtml_legend=1 00:06:11.517 --rc geninfo_all_blocks=1 00:06:11.517 --rc geninfo_unexecuted_blocks=1 00:06:11.517 00:06:11.517 ' 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.517 --rc genhtml_branch_coverage=1 00:06:11.517 --rc genhtml_function_coverage=1 00:06:11.517 --rc genhtml_legend=1 00:06:11.517 --rc geninfo_all_blocks=1 00:06:11.517 --rc geninfo_unexecuted_blocks=1 00:06:11.517 00:06:11.517 ' 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.517 --rc genhtml_branch_coverage=1 00:06:11.517 --rc genhtml_function_coverage=1 00:06:11.517 --rc genhtml_legend=1 00:06:11.517 --rc geninfo_all_blocks=1 00:06:11.517 --rc geninfo_unexecuted_blocks=1 00:06:11.517 00:06:11.517 ' 00:06:11.517 16:14:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.517 16:14:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57792 00:06:11.517 16:14:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.517 16:14:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57792 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57792 ']' 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
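The json_config_extra_key shutdown earlier in the log sends SIGINT to pid 57675 and then loops up to 30 times, probing with `kill -0 57675` and sleeping 0.5 s between probes until the process disappears. That polling teardown can be sketched as below; the helper name and the signal parameter are illustrative (the trace itself always sends SIGINT).

```shell
#!/usr/bin/env bash
# Signal a target process, then poll every 0.5 s (up to ~15 s) until its
# PID no longer exists. Sketch of the shutdown loop seen in the trace.
shutdown_app() {
    local pid=$1 sig=${2:-INT}
    kill -s "$sig" "$pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests that the PID exists.
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still running after the grace period; caller may escalate
}
```

The "SPDK target shutdown done" / "Success" lines in the trace correspond to this loop returning before the 30-iteration limit.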
00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.517 16:14:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 [2024-10-08 16:14:04.922263] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:11.775 [2024-10-08 16:14:04.922538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:06:12.033 [2024-10-08 16:14:05.111017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.291 [2024-10-08 16:14:05.369577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.225 16:14:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.225 16:14:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57792 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57792 ']' 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57792 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57792 00:06:13.225 killing process with pid 57792 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57792' 00:06:13.225 16:14:06 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57792 00:06:13.225 16:14:06 alias_rpc -- common/autotest_common.sh@974 -- # wait 57792 00:06:15.753 ************************************ 00:06:15.753 END TEST alias_rpc 00:06:15.753 ************************************ 00:06:15.753 00:06:15.753 real 0m4.329s 00:06:15.753 user 0m4.378s 00:06:15.753 sys 0m0.670s 00:06:15.753 16:14:08 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.753 16:14:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.753 16:14:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:15.753 16:14:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:15.753 16:14:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.753 16:14:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.753 16:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:15.753 ************************************ 00:06:15.753 START TEST spdkcli_tcp 00:06:15.753 ************************************ 00:06:15.753 16:14:08 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:15.753 * Looking for test storage... 
00:06:15.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:15.753 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.753 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.753 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.011 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.011 16:14:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.012 16:14:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.012 --rc genhtml_branch_coverage=1 00:06:16.012 --rc genhtml_function_coverage=1 00:06:16.012 --rc genhtml_legend=1 00:06:16.012 --rc geninfo_all_blocks=1 00:06:16.012 --rc geninfo_unexecuted_blocks=1 00:06:16.012 00:06:16.012 ' 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.012 --rc genhtml_branch_coverage=1 00:06:16.012 --rc genhtml_function_coverage=1 00:06:16.012 --rc genhtml_legend=1 00:06:16.012 --rc geninfo_all_blocks=1 00:06:16.012 --rc geninfo_unexecuted_blocks=1 00:06:16.012 00:06:16.012 ' 00:06:16.012 16:14:09 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.012 --rc genhtml_branch_coverage=1 00:06:16.012 --rc genhtml_function_coverage=1 00:06:16.012 --rc genhtml_legend=1 00:06:16.012 --rc geninfo_all_blocks=1 00:06:16.012 --rc geninfo_unexecuted_blocks=1 00:06:16.012 00:06:16.012 ' 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.012 --rc genhtml_branch_coverage=1 00:06:16.012 --rc genhtml_function_coverage=1 00:06:16.012 --rc genhtml_legend=1 00:06:16.012 --rc geninfo_all_blocks=1 00:06:16.012 --rc geninfo_unexecuted_blocks=1 00:06:16.012 00:06:16.012 ' 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57899 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57899 00:06:16.012 16:14:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.012 16:14:09 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57899 ']' 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.012 16:14:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.012 [2024-10-08 16:14:09.290924] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:16.012 [2024-10-08 16:14:09.291297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57899 ] 00:06:16.270 [2024-10-08 16:14:09.452137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.527 [2024-10-08 16:14:09.692643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.528 [2024-10-08 16:14:09.692684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.463 16:14:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.463 16:14:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:17.463 16:14:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57922 00:06:17.463 16:14:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.463 16:14:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.722 [ 00:06:17.722 "bdev_malloc_delete", 
00:06:17.722 "bdev_malloc_create", 00:06:17.722 "bdev_null_resize", 00:06:17.722 "bdev_null_delete", 00:06:17.722 "bdev_null_create", 00:06:17.722 "bdev_nvme_cuse_unregister", 00:06:17.722 "bdev_nvme_cuse_register", 00:06:17.722 "bdev_opal_new_user", 00:06:17.722 "bdev_opal_set_lock_state", 00:06:17.722 "bdev_opal_delete", 00:06:17.722 "bdev_opal_get_info", 00:06:17.722 "bdev_opal_create", 00:06:17.722 "bdev_nvme_opal_revert", 00:06:17.722 "bdev_nvme_opal_init", 00:06:17.722 "bdev_nvme_send_cmd", 00:06:17.722 "bdev_nvme_set_keys", 00:06:17.722 "bdev_nvme_get_path_iostat", 00:06:17.722 "bdev_nvme_get_mdns_discovery_info", 00:06:17.722 "bdev_nvme_stop_mdns_discovery", 00:06:17.722 "bdev_nvme_start_mdns_discovery", 00:06:17.722 "bdev_nvme_set_multipath_policy", 00:06:17.722 "bdev_nvme_set_preferred_path", 00:06:17.722 "bdev_nvme_get_io_paths", 00:06:17.722 "bdev_nvme_remove_error_injection", 00:06:17.722 "bdev_nvme_add_error_injection", 00:06:17.722 "bdev_nvme_get_discovery_info", 00:06:17.722 "bdev_nvme_stop_discovery", 00:06:17.722 "bdev_nvme_start_discovery", 00:06:17.722 "bdev_nvme_get_controller_health_info", 00:06:17.722 "bdev_nvme_disable_controller", 00:06:17.722 "bdev_nvme_enable_controller", 00:06:17.722 "bdev_nvme_reset_controller", 00:06:17.722 "bdev_nvme_get_transport_statistics", 00:06:17.722 "bdev_nvme_apply_firmware", 00:06:17.722 "bdev_nvme_detach_controller", 00:06:17.722 "bdev_nvme_get_controllers", 00:06:17.722 "bdev_nvme_attach_controller", 00:06:17.722 "bdev_nvme_set_hotplug", 00:06:17.722 "bdev_nvme_set_options", 00:06:17.722 "bdev_passthru_delete", 00:06:17.722 "bdev_passthru_create", 00:06:17.722 "bdev_lvol_set_parent_bdev", 00:06:17.722 "bdev_lvol_set_parent", 00:06:17.722 "bdev_lvol_check_shallow_copy", 00:06:17.722 "bdev_lvol_start_shallow_copy", 00:06:17.722 "bdev_lvol_grow_lvstore", 00:06:17.722 "bdev_lvol_get_lvols", 00:06:17.722 "bdev_lvol_get_lvstores", 00:06:17.722 "bdev_lvol_delete", 00:06:17.722 "bdev_lvol_set_read_only", 
00:06:17.722 "bdev_lvol_resize", 00:06:17.722 "bdev_lvol_decouple_parent", 00:06:17.722 "bdev_lvol_inflate", 00:06:17.722 "bdev_lvol_rename", 00:06:17.722 "bdev_lvol_clone_bdev", 00:06:17.722 "bdev_lvol_clone", 00:06:17.722 "bdev_lvol_snapshot", 00:06:17.722 "bdev_lvol_create", 00:06:17.722 "bdev_lvol_delete_lvstore", 00:06:17.722 "bdev_lvol_rename_lvstore", 00:06:17.722 "bdev_lvol_create_lvstore", 00:06:17.722 "bdev_raid_set_options", 00:06:17.722 "bdev_raid_remove_base_bdev", 00:06:17.722 "bdev_raid_add_base_bdev", 00:06:17.722 "bdev_raid_delete", 00:06:17.722 "bdev_raid_create", 00:06:17.722 "bdev_raid_get_bdevs", 00:06:17.722 "bdev_error_inject_error", 00:06:17.722 "bdev_error_delete", 00:06:17.722 "bdev_error_create", 00:06:17.722 "bdev_split_delete", 00:06:17.722 "bdev_split_create", 00:06:17.722 "bdev_delay_delete", 00:06:17.722 "bdev_delay_create", 00:06:17.722 "bdev_delay_update_latency", 00:06:17.722 "bdev_zone_block_delete", 00:06:17.722 "bdev_zone_block_create", 00:06:17.722 "blobfs_create", 00:06:17.722 "blobfs_detect", 00:06:17.722 "blobfs_set_cache_size", 00:06:17.722 "bdev_aio_delete", 00:06:17.722 "bdev_aio_rescan", 00:06:17.722 "bdev_aio_create", 00:06:17.722 "bdev_ftl_set_property", 00:06:17.722 "bdev_ftl_get_properties", 00:06:17.722 "bdev_ftl_get_stats", 00:06:17.722 "bdev_ftl_unmap", 00:06:17.722 "bdev_ftl_unload", 00:06:17.722 "bdev_ftl_delete", 00:06:17.722 "bdev_ftl_load", 00:06:17.722 "bdev_ftl_create", 00:06:17.722 "bdev_virtio_attach_controller", 00:06:17.722 "bdev_virtio_scsi_get_devices", 00:06:17.722 "bdev_virtio_detach_controller", 00:06:17.722 "bdev_virtio_blk_set_hotplug", 00:06:17.722 "bdev_iscsi_delete", 00:06:17.722 "bdev_iscsi_create", 00:06:17.722 "bdev_iscsi_set_options", 00:06:17.722 "accel_error_inject_error", 00:06:17.722 "ioat_scan_accel_module", 00:06:17.722 "dsa_scan_accel_module", 00:06:17.722 "iaa_scan_accel_module", 00:06:17.722 "keyring_file_remove_key", 00:06:17.722 "keyring_file_add_key", 00:06:17.722 
"keyring_linux_set_options", 00:06:17.722 "fsdev_aio_delete", 00:06:17.722 "fsdev_aio_create", 00:06:17.722 "iscsi_get_histogram", 00:06:17.722 "iscsi_enable_histogram", 00:06:17.722 "iscsi_set_options", 00:06:17.722 "iscsi_get_auth_groups", 00:06:17.722 "iscsi_auth_group_remove_secret", 00:06:17.722 "iscsi_auth_group_add_secret", 00:06:17.722 "iscsi_delete_auth_group", 00:06:17.722 "iscsi_create_auth_group", 00:06:17.722 "iscsi_set_discovery_auth", 00:06:17.722 "iscsi_get_options", 00:06:17.722 "iscsi_target_node_request_logout", 00:06:17.722 "iscsi_target_node_set_redirect", 00:06:17.722 "iscsi_target_node_set_auth", 00:06:17.722 "iscsi_target_node_add_lun", 00:06:17.722 "iscsi_get_stats", 00:06:17.722 "iscsi_get_connections", 00:06:17.722 "iscsi_portal_group_set_auth", 00:06:17.722 "iscsi_start_portal_group", 00:06:17.722 "iscsi_delete_portal_group", 00:06:17.722 "iscsi_create_portal_group", 00:06:17.722 "iscsi_get_portal_groups", 00:06:17.722 "iscsi_delete_target_node", 00:06:17.722 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.722 "iscsi_target_node_add_pg_ig_maps", 00:06:17.722 "iscsi_create_target_node", 00:06:17.722 "iscsi_get_target_nodes", 00:06:17.722 "iscsi_delete_initiator_group", 00:06:17.722 "iscsi_initiator_group_remove_initiators", 00:06:17.722 "iscsi_initiator_group_add_initiators", 00:06:17.722 "iscsi_create_initiator_group", 00:06:17.722 "iscsi_get_initiator_groups", 00:06:17.722 "nvmf_set_crdt", 00:06:17.722 "nvmf_set_config", 00:06:17.722 "nvmf_set_max_subsystems", 00:06:17.722 "nvmf_stop_mdns_prr", 00:06:17.722 "nvmf_publish_mdns_prr", 00:06:17.722 "nvmf_subsystem_get_listeners", 00:06:17.722 "nvmf_subsystem_get_qpairs", 00:06:17.722 "nvmf_subsystem_get_controllers", 00:06:17.722 "nvmf_get_stats", 00:06:17.722 "nvmf_get_transports", 00:06:17.722 "nvmf_create_transport", 00:06:17.722 "nvmf_get_targets", 00:06:17.722 "nvmf_delete_target", 00:06:17.722 "nvmf_create_target", 00:06:17.722 "nvmf_subsystem_allow_any_host", 00:06:17.722 
"nvmf_subsystem_set_keys", 00:06:17.722 "nvmf_subsystem_remove_host", 00:06:17.722 "nvmf_subsystem_add_host", 00:06:17.722 "nvmf_ns_remove_host", 00:06:17.722 "nvmf_ns_add_host", 00:06:17.722 "nvmf_subsystem_remove_ns", 00:06:17.722 "nvmf_subsystem_set_ns_ana_group", 00:06:17.723 "nvmf_subsystem_add_ns", 00:06:17.723 "nvmf_subsystem_listener_set_ana_state", 00:06:17.723 "nvmf_discovery_get_referrals", 00:06:17.723 "nvmf_discovery_remove_referral", 00:06:17.723 "nvmf_discovery_add_referral", 00:06:17.723 "nvmf_subsystem_remove_listener", 00:06:17.723 "nvmf_subsystem_add_listener", 00:06:17.723 "nvmf_delete_subsystem", 00:06:17.723 "nvmf_create_subsystem", 00:06:17.723 "nvmf_get_subsystems", 00:06:17.723 "env_dpdk_get_mem_stats", 00:06:17.723 "nbd_get_disks", 00:06:17.723 "nbd_stop_disk", 00:06:17.723 "nbd_start_disk", 00:06:17.723 "ublk_recover_disk", 00:06:17.723 "ublk_get_disks", 00:06:17.723 "ublk_stop_disk", 00:06:17.723 "ublk_start_disk", 00:06:17.723 "ublk_destroy_target", 00:06:17.723 "ublk_create_target", 00:06:17.723 "virtio_blk_create_transport", 00:06:17.723 "virtio_blk_get_transports", 00:06:17.723 "vhost_controller_set_coalescing", 00:06:17.723 "vhost_get_controllers", 00:06:17.723 "vhost_delete_controller", 00:06:17.723 "vhost_create_blk_controller", 00:06:17.723 "vhost_scsi_controller_remove_target", 00:06:17.723 "vhost_scsi_controller_add_target", 00:06:17.723 "vhost_start_scsi_controller", 00:06:17.723 "vhost_create_scsi_controller", 00:06:17.723 "thread_set_cpumask", 00:06:17.723 "scheduler_set_options", 00:06:17.723 "framework_get_governor", 00:06:17.723 "framework_get_scheduler", 00:06:17.723 "framework_set_scheduler", 00:06:17.723 "framework_get_reactors", 00:06:17.723 "thread_get_io_channels", 00:06:17.723 "thread_get_pollers", 00:06:17.723 "thread_get_stats", 00:06:17.723 "framework_monitor_context_switch", 00:06:17.723 "spdk_kill_instance", 00:06:17.723 "log_enable_timestamps", 00:06:17.723 "log_get_flags", 00:06:17.723 "log_clear_flag", 
00:06:17.723 "log_set_flag", 00:06:17.723 "log_get_level", 00:06:17.723 "log_set_level", 00:06:17.723 "log_get_print_level", 00:06:17.723 "log_set_print_level", 00:06:17.723 "framework_enable_cpumask_locks", 00:06:17.723 "framework_disable_cpumask_locks", 00:06:17.723 "framework_wait_init", 00:06:17.723 "framework_start_init", 00:06:17.723 "scsi_get_devices", 00:06:17.723 "bdev_get_histogram", 00:06:17.723 "bdev_enable_histogram", 00:06:17.723 "bdev_set_qos_limit", 00:06:17.723 "bdev_set_qd_sampling_period", 00:06:17.723 "bdev_get_bdevs", 00:06:17.723 "bdev_reset_iostat", 00:06:17.723 "bdev_get_iostat", 00:06:17.723 "bdev_examine", 00:06:17.723 "bdev_wait_for_examine", 00:06:17.723 "bdev_set_options", 00:06:17.723 "accel_get_stats", 00:06:17.723 "accel_set_options", 00:06:17.723 "accel_set_driver", 00:06:17.723 "accel_crypto_key_destroy", 00:06:17.723 "accel_crypto_keys_get", 00:06:17.723 "accel_crypto_key_create", 00:06:17.723 "accel_assign_opc", 00:06:17.723 "accel_get_module_info", 00:06:17.723 "accel_get_opc_assignments", 00:06:17.723 "vmd_rescan", 00:06:17.723 "vmd_remove_device", 00:06:17.723 "vmd_enable", 00:06:17.723 "sock_get_default_impl", 00:06:17.723 "sock_set_default_impl", 00:06:17.723 "sock_impl_set_options", 00:06:17.723 "sock_impl_get_options", 00:06:17.723 "iobuf_get_stats", 00:06:17.723 "iobuf_set_options", 00:06:17.723 "keyring_get_keys", 00:06:17.723 "framework_get_pci_devices", 00:06:17.723 "framework_get_config", 00:06:17.723 "framework_get_subsystems", 00:06:17.723 "fsdev_set_opts", 00:06:17.723 "fsdev_get_opts", 00:06:17.723 "trace_get_info", 00:06:17.723 "trace_get_tpoint_group_mask", 00:06:17.723 "trace_disable_tpoint_group", 00:06:17.723 "trace_enable_tpoint_group", 00:06:17.723 "trace_clear_tpoint_mask", 00:06:17.723 "trace_set_tpoint_mask", 00:06:17.723 "notify_get_notifications", 00:06:17.723 "notify_get_types", 00:06:17.723 "spdk_get_version", 00:06:17.723 "rpc_get_methods" 00:06:17.723 ] 00:06:17.723 16:14:10 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.723 16:14:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.723 16:14:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57899 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57899 ']' 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57899 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57899 00:06:17.723 killing process with pid 57899 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57899' 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57899 00:06:17.723 16:14:10 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57899 00:06:20.253 ************************************ 00:06:20.253 END TEST spdkcli_tcp 00:06:20.253 ************************************ 00:06:20.253 00:06:20.253 real 0m4.561s 00:06:20.253 user 0m8.071s 00:06:20.253 sys 0m0.647s 00:06:20.253 16:14:13 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.253 16:14:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.511 16:14:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.511 16:14:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.511 16:14:13 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.511 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.511 ************************************ 00:06:20.511 START TEST dpdk_mem_utility 00:06:20.511 ************************************ 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.511 * Looking for test storage... 00:06:20.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:20.511 
16:14:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:20.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.511 16:14:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.511 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.511 --rc genhtml_branch_coverage=1 00:06:20.512 --rc genhtml_function_coverage=1 00:06:20.512 --rc genhtml_legend=1 00:06:20.512 --rc geninfo_all_blocks=1 00:06:20.512 --rc geninfo_unexecuted_blocks=1 00:06:20.512 00:06:20.512 ' 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.512 --rc genhtml_branch_coverage=1 00:06:20.512 --rc genhtml_function_coverage=1 00:06:20.512 --rc genhtml_legend=1 00:06:20.512 --rc geninfo_all_blocks=1 00:06:20.512 --rc geninfo_unexecuted_blocks=1 00:06:20.512 00:06:20.512 ' 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.512 --rc genhtml_branch_coverage=1 00:06:20.512 --rc genhtml_function_coverage=1 00:06:20.512 --rc genhtml_legend=1 00:06:20.512 --rc geninfo_all_blocks=1 00:06:20.512 --rc geninfo_unexecuted_blocks=1 00:06:20.512 00:06:20.512 ' 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.512 --rc genhtml_branch_coverage=1 00:06:20.512 --rc genhtml_function_coverage=1 00:06:20.512 --rc genhtml_legend=1 
00:06:20.512 --rc geninfo_all_blocks=1 00:06:20.512 --rc geninfo_unexecuted_blocks=1 00:06:20.512 00:06:20.512 ' 00:06:20.512 16:14:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:20.512 16:14:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58027 00:06:20.512 16:14:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58027 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58027 ']' 00:06:20.512 16:14:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.512 16:14:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.770 [2024-10-08 16:14:13.907598] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:06:20.770 [2024-10-08 16:14:13.908071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58027 ] 00:06:20.770 [2024-10-08 16:14:14.088183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.336 [2024-10-08 16:14:14.384145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.312 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.312 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:22.312 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.312 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.312 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.312 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.312 { 00:06:22.312 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.312 } 00:06:22.312 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.312 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:22.312 DPDK memory size 866.000000 MiB in 1 heap(s) 00:06:22.312 1 heaps totaling size 866.000000 MiB 00:06:22.312 size: 866.000000 MiB heap id: 0 00:06:22.312 end heaps---------- 00:06:22.312 9 mempools totaling size 642.649841 MiB 00:06:22.312 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.312 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.312 size: 92.545471 MiB name: bdev_io_58027 00:06:22.312 size: 51.011292 MiB name: evtpool_58027 00:06:22.312 size: 50.003479 MiB name: msgpool_58027 00:06:22.312 size: 
36.509338 MiB name: fsdev_io_58027 00:06:22.312 size: 21.763794 MiB name: PDU_Pool 00:06:22.312 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.312 size: 0.026123 MiB name: Session_Pool 00:06:22.312 end mempools------- 00:06:22.312 6 memzones totaling size 4.142822 MiB 00:06:22.312 size: 1.000366 MiB name: RG_ring_0_58027 00:06:22.312 size: 1.000366 MiB name: RG_ring_1_58027 00:06:22.312 size: 1.000366 MiB name: RG_ring_4_58027 00:06:22.312 size: 1.000366 MiB name: RG_ring_5_58027 00:06:22.312 size: 0.125366 MiB name: RG_ring_2_58027 00:06:22.312 size: 0.015991 MiB name: RG_ring_3_58027 00:06:22.312 end memzones------- 00:06:22.312 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.312 heap id: 0 total size: 866.000000 MiB number of busy elements: 304 number of free elements: 19 00:06:22.312 list of free elements. size: 19.916260 MiB 00:06:22.312 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:22.312 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:22.312 element at address: 0x200009600000 with size: 1.995972 MiB 00:06:22.312 element at address: 0x20000d800000 with size: 1.995972 MiB 00:06:22.312 element at address: 0x200007000000 with size: 1.991028 MiB 00:06:22.312 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:06:22.312 element at address: 0x20001c300040 with size: 0.999939 MiB 00:06:22.312 element at address: 0x20001c400000 with size: 0.999084 MiB 00:06:22.312 element at address: 0x200035000000 with size: 0.994324 MiB 00:06:22.312 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:06:22.312 element at address: 0x20001c700040 with size: 0.936401 MiB 00:06:22.312 element at address: 0x200000200000 with size: 0.831909 MiB 00:06:22.312 element at address: 0x20001de00000 with size: 0.562195 MiB 00:06:22.312 element at address: 0x200003e00000 with size: 0.491882 MiB 00:06:22.312 element at address: 0x20001c000000 
with size: 0.489197 MiB
00:06:22.312 element at address: 0x20001c800000 with size: 0.485413 MiB
00:06:22.312 element at address: 0x200015e00000 with size: 0.443481 MiB
00:06:22.312 element at address: 0x20002b200000 with size: 0.390442 MiB
00:06:22.312 element at address: 0x200003a00000 with size: 0.353088 MiB
00:06:22.312 list of standard malloc elements. size: 199.285034 MiB
00:06:22.312 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:06:22.312 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:06:22.312 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:06:22.312 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:06:22.312 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:06:22.312 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:22.312 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:06:22.312 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:22.312 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:06:22.312 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:06:22.312 element at address: 0x200015dff040 with size: 0.000305 MiB
00:06:22.312 elements at addresses 0x2000002d4f80 .. 0x20002b26fe80 with size: 0.000244 MiB each (long run of identical fixed-size elements, enumerated one per line in the raw log)
00:06:22.314 list of memzone associated elements. size: 646.798706 MiB
00:06:22.314 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:06:22.314 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:22.314 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:06:22.314 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:22.314 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:06:22.314 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58027_0
00:06:22.314 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:06:22.314 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58027_0
00:06:22.314 element at address: 0x200003fff340 with size: 48.003113 MiB
00:06:22.314 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58027_0
00:06:22.314 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:06:22.314 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58027_0
00:06:22.314 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:06:22.314 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:22.314 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:06:22.314 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:22.314 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:06:22.314 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58027
00:06:22.314 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:06:22.314 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58027
00:06:22.314 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:22.314 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58027
00:06:22.314 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:06:22.314 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:22.314 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:06:22.314 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:22.314 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:06:22.314 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:22.314 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:06:22.314 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:22.314 element at address: 0x200003eff100 with size: 1.000549 MiB
00:06:22.314 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58027
00:06:22.314 element at address: 0x200003affb80 with size: 1.000549 MiB
00:06:22.314 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58027
00:06:22.314 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:06:22.314 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58027
00:06:22.314 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:06:22.314 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58027
00:06:22.314 element at address: 0x200003a7f5c0 with size: 0.500549 MiB
00:06:22.314 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58027
00:06:22.314 element at address: 0x200003e7ecc0 with size: 0.500549 MiB
00:06:22.314 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58027
00:06:22.314 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:06:22.314 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:22.314 element at address: 0x200015e72280 with size: 0.500549 MiB
00:06:22.314 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:22.314 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:06:22.314 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:22.314 element at address: 0x200003a5e880 with size: 0.125549 MiB
00:06:22.314 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58027
00:06:22.314 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:06:22.314 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:22.314 element at address: 0x20002b264140 with size: 0.023804 MiB
00:06:22.314 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:22.314 element at address: 0x200003a5a640 with size: 0.016174 MiB
00:06:22.314 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58027
00:06:22.314 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:06:22.314 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:22.314 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:06:22.314 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58027
00:06:22.314 element at address: 0x200003aff900 with size: 0.000366 MiB
00:06:22.314 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58027
00:06:22.314 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:06:22.314 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58027
00:06:22.315 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:06:22.315 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:22.315 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:22.315 16:14:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58027
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58027 ']'
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58027
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- #
uname
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58027
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:22.315 killing process with pid 58027 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58027'
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58027
00:06:22.315 16:14:15 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58027
00:06:24.843 ************************************
00:06:24.844 END TEST dpdk_mem_utility
00:06:24.844 ************************************
00:06:24.844
00:06:24.844 real 0m4.523s
00:06:24.844 user 0m4.407s
00:06:24.844 sys 0m0.748s
00:06:24.844 16:14:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:24.844 16:14:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:24.844 16:14:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
16:14:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
16:14:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
16:14:18 -- common/autotest_common.sh@10 -- # set +x
00:06:25.102 ************************************
00:06:25.102 START TEST event
00:06:25.102 ************************************
00:06:25.102 16:14:18 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:25.102 * Looking for test storage...
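The killprocess teardown traced above (common/autotest_common.sh lines @950 through @974) can be sketched as a standalone helper. This is a hedged reconstruction from the xtrace output only, not the actual autotest_common.sh source; details such as the sudo branch are assumptions.

```shell
# Hedged sketch of killprocess, reconstructed from the xtrace above;
# the real common/autotest_common.sh implementation may differ.
killprocess() {
    local pid=$1
    # @950: refuse an empty pid argument
    if [ -z "$pid" ]; then
        return 1
    fi
    # @954: verify the process is still alive before signalling it
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1
    fi
    # @955/@956: on Linux, resolve the process name via ps
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$pid
    fi
    # @960: a sudo wrapper would need different handling; bail out here (assumed)
    if [ "$process_name" = sudo ]; then
        return 1
    fi
    echo "killing process with pid $pid"   # @968
    kill "$pid"                            # @969
    wait "$pid" 2>/dev/null || true        # @974: reap the child, ignore status
}
```

In the trace the target (pid 58027, comm `reactor_0`) passes every guard, so the helper prints "killing process with pid 58027" and waits for the reactor to exit before the test is declared done.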
00:06:25.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:25.102 16:14:18 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:25.102 16:14:18 event -- common/autotest_common.sh@1681 -- # lcov --version
00:06:25.102 16:14:18 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:25.102 16:14:18 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:25.102 16:14:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:25.102 16:14:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:25.102 16:14:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:25.102 16:14:18 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:25.102 16:14:18 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:25.102 16:14:18 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:25.102 16:14:18 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:25.102 16:14:18 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:25.102 16:14:18 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:25.102 16:14:18 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:25.102 16:14:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:25.102 16:14:18 event -- scripts/common.sh@344 -- # case "$op" in
00:06:25.103 16:14:18 event -- scripts/common.sh@345 -- # : 1
00:06:25.103 16:14:18 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:25.103 16:14:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:25.103 16:14:18 event -- scripts/common.sh@365 -- # decimal 1
00:06:25.103 16:14:18 event -- scripts/common.sh@353 -- # local d=1
00:06:25.103 16:14:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:25.103 16:14:18 event -- scripts/common.sh@355 -- # echo 1
00:06:25.103 16:14:18 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:25.103 16:14:18 event -- scripts/common.sh@366 -- # decimal 2
00:06:25.103 16:14:18 event -- scripts/common.sh@353 -- # local d=2
00:06:25.103 16:14:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:25.103 16:14:18 event -- scripts/common.sh@355 -- # echo 2
00:06:25.103 16:14:18 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:25.103 16:14:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:25.103 16:14:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:25.103 16:14:18 event -- scripts/common.sh@368 -- # return 0
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.103 --rc genhtml_branch_coverage=1
00:06:25.103 --rc genhtml_function_coverage=1
00:06:25.103 --rc genhtml_legend=1
00:06:25.103 --rc geninfo_all_blocks=1
00:06:25.103 --rc geninfo_unexecuted_blocks=1
00:06:25.103
00:06:25.103 '
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.103 --rc genhtml_branch_coverage=1
00:06:25.103 --rc genhtml_function_coverage=1
00:06:25.103 --rc genhtml_legend=1
00:06:25.103 --rc geninfo_all_blocks=1
00:06:25.103 --rc geninfo_unexecuted_blocks=1
00:06:25.103
00:06:25.103 '
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.103 --rc genhtml_branch_coverage=1
00:06:25.103 --rc genhtml_function_coverage=1
00:06:25.103 --rc genhtml_legend=1
00:06:25.103 --rc geninfo_all_blocks=1
00:06:25.103 --rc geninfo_unexecuted_blocks=1
00:06:25.103
00:06:25.103 '
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.103 --rc genhtml_branch_coverage=1
00:06:25.103 --rc genhtml_function_coverage=1
00:06:25.103 --rc genhtml_legend=1
00:06:25.103 --rc geninfo_all_blocks=1
00:06:25.103 --rc geninfo_unexecuted_blocks=1
00:06:25.103
00:06:25.103 '
00:06:25.103 16:14:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:25.103 16:14:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:25.103 16:14:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:25.103 16:14:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:25.103 16:14:18 event -- common/autotest_common.sh@10 -- # set +x
00:06:25.103 ************************************
00:06:25.103 START TEST event_perf
00:06:25.103 ************************************
00:06:25.103 16:14:18 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:25.360 Running I/O for 1 seconds...[2024-10-08 16:14:18.452312] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:06:25.360 [2024-10-08 16:14:18.452679] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ]
00:06:25.360 [2024-10-08 16:14:18.630647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:25.925 [2024-10-08 16:14:18.945034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.925 [2024-10-08 16:14:18.945206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:25.925 [2024-10-08 16:14:18.945350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.925 [2024-10-08 16:14:18.945384] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:06:27.299 Running I/O for 1 seconds...
00:06:27.299 lcore 0: 118060
00:06:27.299 lcore 1: 118059
00:06:27.299 lcore 2: 118060
00:06:27.299 lcore 3: 118060
00:06:27.299 done.
00:06:27.299
00:06:27.299 real 0m2.020s
00:06:27.299 user 0m4.716s
00:06:27.299 sys 0m0.165s
00:06:27.299 16:14:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:27.299 16:14:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:27.299 ************************************
00:06:27.299 END TEST event_perf
00:06:27.299 ************************************
00:06:27.299 16:14:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:27.299 16:14:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:27.299 16:14:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:27.299 16:14:20 event -- common/autotest_common.sh@10 -- # set +x
00:06:27.299 ************************************
00:06:27.299 START TEST event_reactor
00:06:27.299 ************************************
00:06:27.299 16:14:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:27.299 [2024-10-08 16:14:20.535614] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:06:27.299 [2024-10-08 16:14:20.536169] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58185 ]
00:06:27.558 [2024-10-08 16:14:20.724709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.816 [2024-10-08 16:14:21.022064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.192 test_start
00:06:29.192 oneshot
00:06:29.192 tick 100
00:06:29.192 tick 100
00:06:29.192 tick 250
00:06:29.192 tick 100
00:06:29.192 tick 100
00:06:29.192 tick 100
00:06:29.192 tick 250
00:06:29.192 tick 500
00:06:29.192 tick 100
00:06:29.192 tick 100
00:06:29.192 tick 250
00:06:29.192 tick 100
00:06:29.192 tick 100
00:06:29.192 test_end
00:06:29.192
00:06:29.192 real 0m1.996s
00:06:29.192 user 0m1.735s
00:06:29.192 sys 0m0.146s
00:06:29.192 16:14:22 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:29.192 ************************************ END TEST event_reactor ************************************
00:06:29.192 16:14:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:29.450 16:14:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:29.450 16:14:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:29.450 16:14:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:29.450 16:14:22 event -- common/autotest_common.sh@10 -- # set +x
00:06:29.450 ************************************
00:06:29.450 START TEST event_reactor_perf
00:06:29.450 ************************************
00:06:29.450 16:14:22 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:29.450 [2024-10-08
16:14:22.582850] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:29.450 [2024-10-08 16:14:22.583342] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ] 00:06:29.450 [2024-10-08 16:14:22.765555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.018 [2024-10-08 16:14:23.041302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.453 test_start 00:06:31.453 test_end 00:06:31.453 Performance: 266749 events per second 00:06:31.453 00:06:31.453 real 0m1.960s 00:06:31.453 user 0m1.703s 00:06:31.453 sys 0m0.142s 00:06:31.453 16:14:24 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.453 ************************************ 00:06:31.453 END TEST event_reactor_perf 00:06:31.453 ************************************ 00:06:31.453 16:14:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 16:14:24 event -- event/event.sh@49 -- # uname -s 00:06:31.453 16:14:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:31.453 16:14:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:31.453 16:14:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.453 16:14:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.453 16:14:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 ************************************ 00:06:31.453 START TEST event_scheduler 00:06:31.453 ************************************ 00:06:31.453 16:14:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:31.453 * Looking for test storage... 
00:06:31.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:31.453 16:14:24 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:31.453 16:14:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version
00:06:31.453 16:14:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:31.453 16:14:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:31.453 16:14:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:31.453 16:14:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:31.453 16:14:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:31.453 16:14:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:31.453 16:14:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:31.454 16:14:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.454 --rc genhtml_branch_coverage=1
00:06:31.454 --rc genhtml_function_coverage=1
00:06:31.454 --rc genhtml_legend=1
00:06:31.454 --rc geninfo_all_blocks=1
00:06:31.454 --rc geninfo_unexecuted_blocks=1
00:06:31.454
00:06:31.454 '
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.454 --rc genhtml_branch_coverage=1
00:06:31.454 --rc genhtml_function_coverage=1
00:06:31.454 --rc genhtml_legend=1
00:06:31.454 --rc geninfo_all_blocks=1
00:06:31.454 --rc geninfo_unexecuted_blocks=1
00:06:31.454
00:06:31.454 '
00:06:31.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.454 --rc genhtml_branch_coverage=1
00:06:31.454 --rc genhtml_function_coverage=1
00:06:31.454 --rc genhtml_legend=1
00:06:31.454 --rc geninfo_all_blocks=1
00:06:31.454 --rc geninfo_unexecuted_blocks=1
00:06:31.454
00:06:31.454 '
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.454 --rc genhtml_branch_coverage=1
00:06:31.454 --rc genhtml_function_coverage=1
00:06:31.454 --rc genhtml_legend=1
00:06:31.454 --rc geninfo_all_blocks=1
00:06:31.454 --rc geninfo_unexecuted_blocks=1
00:06:31.454
00:06:31.454 '
00:06:31.454 16:14:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:31.454 16:14:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58303
00:06:31.454 16:14:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:31.454 16:14:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:31.454 16:14:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58303
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58303 ']'
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:31.454 16:14:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:31.712 [2024-10-08 16:14:24.879421] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:06:31.712 [2024-10-08 16:14:24.880681] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58303 ]
00:06:31.971 [2024-10-08 16:14:25.063259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:32.232 [2024-10-08 16:14:25.330317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.232 [2024-10-08 16:14:25.330507] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:32.232 [2024-10-08 16:14:25.330607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:06:32.232 [2024-10-08 16:14:25.330608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:32.800 16:14:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:32.800 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:32.800 POWER: Cannot set governor of lcore 0 to userspace
00:06:32.800 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:32.800 POWER: Cannot set governor of lcore 0 to performance
00:06:32.800 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:32.800 POWER: Cannot set governor of lcore 0 to userspace
00:06:32.800 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:32.800 POWER: Cannot set governor of lcore 0 to userspace
00:06:32.800 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:32.800 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:32.800 POWER: Unable to set Power Management Environment for lcore 0
00:06:32.800 [2024-10-08 16:14:25.962572] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:06:32.800 [2024-10-08 16:14:25.962606] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:06:32.800 [2024-10-08 16:14:25.962623] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:32.800 [2024-10-08 16:14:25.962683] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:32.800 [2024-10-08 16:14:25.962704] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:32.800 [2024-10-08 16:14:25.962719] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:32.800 16:14:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:32.800 16:14:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 [2024-10-08 16:14:26.289597] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:33.059 16:14:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:33.059 16:14:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:33.059 16:14:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 ************************************
00:06:33.059 START TEST scheduler_create_thread
00:06:33.059 ************************************
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 2
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 3
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 4
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 5
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 6
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 7
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 8
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.059 9
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.059 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.317 10
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:33.317 16:14:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:34.692 16:14:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:34.692 16:14:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:34.692 16:14:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:34.692 16:14:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:34.692 16:14:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.642 16:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.642
00:06:35.642 real 0m2.617s
00:06:35.642 user 0m0.014s
00:06:35.642 sys 0m0.007s
************************************
00:06:35.642 END TEST scheduler_create_thread
************************************
00:06:35.642 16:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:35.642 16:14:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.909 16:14:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:35.909 16:14:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58303
00:06:35.909 16:14:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58303 ']'
00:06:35.909 16:14:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58303
00:06:35.909 16:14:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:06:35.909 16:14:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:35.909 16:14:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58303
00:06:35.909 killing process with pid 58303
00:06:35.909 16:14:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:06:35.909 16:14:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:06:35.909 16:14:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58303'
00:06:35.909 16:14:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58303
00:06:35.909 16:14:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58303
00:06:36.167 [2024-10-08 16:14:29.398881] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:37.539 ************************************
00:06:37.539 END TEST event_scheduler
00:06:37.539 ************************************
00:06:37.539
00:06:37.539 real 0m6.112s
00:06:37.539 user 0m10.444s
00:06:37.539 sys 0m0.574s
00:06:37.539 16:14:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:37.539 16:14:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:37.539 16:14:30 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:37.539 16:14:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:37.539 16:14:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:37.539 16:14:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:37.539 16:14:30 event -- common/autotest_common.sh@10 -- # set +x
00:06:37.539 ************************************
00:06:37.539 START TEST app_repeat
00:06:37.539 ************************************
00:06:37.539 16:14:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:37.539 Process app_repeat pid: 58420
spdk_app_start Round 0
16:14:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58420
00:06:37.539 16:14:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:37.540 16:14:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:37.540 16:14:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58420'
00:06:37.540 16:14:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:37.540 16:14:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:37.540 16:14:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58420 /var/tmp/spdk-nbd.sock
00:06:37.540 16:14:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58420 ']'
00:06:37.540 16:14:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:37.540 16:14:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:37.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
16:14:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:37.540 16:14:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:37.540 16:14:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:37.540 [2024-10-08 16:14:30.792109] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:06:37.540 [2024-10-08 16:14:30.792589] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58420 ]
00:06:37.797 [2024-10-08 16:14:30.960795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:38.055 [2024-10-08 16:14:31.237241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.055 [2024-10-08 16:14:31.237252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.621 16:14:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:38.621 16:14:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:38.621 16:14:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:39.187 Malloc0
00:06:39.187 16:14:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:39.446 Malloc1
00:06:39.446 16:14:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:39.446 16:14:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:39.704 /dev/nbd0
00:06:39.704 16:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:39.704 16:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:39.704 1+0 records in
00:06:39.704 1+0 records out
00:06:39.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428696 s, 9.6 MB/s
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:39.704 16:14:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:39.704 16:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:39.704 16:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:39.704 16:14:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:40.270 /dev/nbd1
00:06:40.270 16:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:40.270 16:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:40.270 16:14:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:40.271 1+0 records in
00:06:40.271 1+0 records out
00:06:40.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514478 s, 8.0 MB/s
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:40.271 16:14:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:40.271 16:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:40.271 16:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:40.271 16:14:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:40.271 16:14:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.271 16:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:40.529 {
00:06:40.529 "nbd_device": "/dev/nbd0",
00:06:40.529 "bdev_name": "Malloc0"
00:06:40.529 },
00:06:40.529 {
00:06:40.529 "nbd_device": "/dev/nbd1",
00:06:40.529 "bdev_name": "Malloc1"
00:06:40.529 }
00:06:40.529 ]'
00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:40.529 {
00:06:40.529 "nbd_device": "/dev/nbd0",
00:06:40.529 "bdev_name": "Malloc0"
00:06:40.529 },
00:06:40.529 {
00:06:40.529 "nbd_device": "/dev/nbd1",
00:06:40.529 "bdev_name": "Malloc1"
00:06:40.529 }
00:06:40.529 ]'
00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.529 /dev/nbd1' 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.529 /dev/nbd1' 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.529 16:14:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.530 256+0 records in 00:06:40.530 256+0 records out 00:06:40.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00801909 s, 131 MB/s 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.530 256+0 records in 00:06:40.530 256+0 records out 00:06:40.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308958 s, 33.9 MB/s 00:06:40.530 16:14:33 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.530 256+0 records in 00:06:40.530 256+0 records out 00:06:40.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360077 s, 29.1 MB/s 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.530 16:14:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.788 16:14:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.047 16:14:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.305 16:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.305 16:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.305 16:14:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.305 16:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.305 16:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.306 16:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.564 16:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.564 16:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.564 16:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.822 16:14:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.822 16:14:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.389 16:14:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.807 [2024-10-08 16:14:36.765227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.807 [2024-10-08 16:14:37.032983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.807 [2024-10-08 16:14:37.032993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.065 
[2024-10-08 16:14:37.243335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.065 [2024-10-08 16:14:37.243489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.438 spdk_app_start Round 1 00:06:45.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.438 16:14:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.438 16:14:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.438 16:14:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58420 /var/tmp/spdk-nbd.sock 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58420 ']' 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.438 16:14:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.438 16:14:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.006 Malloc0 00:06:46.006 16:14:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.265 Malloc1 00:06:46.265 16:14:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.265 16:14:39 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.265 16:14:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.524 /dev/nbd0 00:06:46.524 16:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.524 16:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.524 1+0 records in 00:06:46.524 1+0 records out 00:06:46.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375169 s, 10.9 MB/s 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.524 
16:14:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.524 16:14:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.524 16:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.524 16:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.524 16:14:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.783 /dev/nbd1 00:06:46.783 16:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.783 16:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.783 16:14:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.042 1+0 records in 00:06:47.042 1+0 records out 00:06:47.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428533 s, 9.6 MB/s 00:06:47.042 16:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.042 16:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:47.042 16:14:40 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.042 16:14:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.042 16:14:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:47.042 16:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.042 16:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.042 16:14:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.042 16:14:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.042 16:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.300 16:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.300 { 00:06:47.300 "nbd_device": "/dev/nbd0", 00:06:47.300 "bdev_name": "Malloc0" 00:06:47.300 }, 00:06:47.300 { 00:06:47.300 "nbd_device": "/dev/nbd1", 00:06:47.300 "bdev_name": "Malloc1" 00:06:47.300 } 00:06:47.300 ]' 00:06:47.300 16:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.300 { 00:06:47.300 "nbd_device": "/dev/nbd0", 00:06:47.300 "bdev_name": "Malloc0" 00:06:47.300 }, 00:06:47.300 { 00:06:47.300 "nbd_device": "/dev/nbd1", 00:06:47.301 "bdev_name": "Malloc1" 00:06:47.301 } 00:06:47.301 ]' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.301 /dev/nbd1' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.301 /dev/nbd1' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.301 
16:14:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.301 256+0 records in 00:06:47.301 256+0 records out 00:06:47.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806609 s, 130 MB/s 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.301 256+0 records in 00:06:47.301 256+0 records out 00:06:47.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300788 s, 34.9 MB/s 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.301 256+0 records in 00:06:47.301 256+0 records out 00:06:47.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304932 s, 34.4 MB/s 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.301 16:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.561 16:14:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.561 16:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.561 16:14:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.561 16:14:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.562 16:14:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.827 16:14:40 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.827 16:14:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.086 16:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.344 16:14:41 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.344 16:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.344 16:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.344 16:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.344 16:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.344 16:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.345 16:14:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.345 16:14:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.910 16:14:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.284 [2024-10-08 16:14:43.480032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.543 [2024-10-08 16:14:43.748164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.543 [2024-10-08 16:14:43.748164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.801 [2024-10-08 16:14:43.960088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.801 [2024-10-08 16:14:43.960219] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.175 spdk_app_start Round 2 00:06:52.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:52.175 16:14:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.175 16:14:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:52.175 16:14:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58420 /var/tmp/spdk-nbd.sock 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58420 ']' 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.175 16:14:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:52.175 16:14:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.433 Malloc0 00:06:52.433 16:14:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.999 Malloc1 00:06:52.999 16:14:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.999 16:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.257 /dev/nbd0 00:06:53.257 16:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.257 16:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.257 1+0 records in 00:06:53.257 1+0 records out 00:06:53.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317631 s, 12.9 MB/s 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.257 16:14:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:53.257 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.257 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.257 16:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.514 /dev/nbd1 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:53.514 16:14:46 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.514 1+0 records in 00:06:53.514 1+0 records out 00:06:53.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353755 s, 11.6 MB/s 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.514 16:14:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.514 16:14:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.515 16:14:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.773 { 00:06:53.773 "nbd_device": "/dev/nbd0", 00:06:53.773 "bdev_name": "Malloc0" 00:06:53.773 }, 00:06:53.773 { 00:06:53.773 "nbd_device": "/dev/nbd1", 00:06:53.773 "bdev_name": "Malloc1" 00:06:53.773 } 00:06:53.773 ]' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.773 { 00:06:53.773 "nbd_device": "/dev/nbd0", 00:06:53.773 "bdev_name": "Malloc0" 00:06:53.773 }, 00:06:53.773 { 00:06:53.773 "nbd_device": "/dev/nbd1", 00:06:53.773 "bdev_name": "Malloc1" 00:06:53.773 } 00:06:53.773 ]' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.773 /dev/nbd1' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.773 /dev/nbd1' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.773 256+0 records in 00:06:53.773 256+0 records out 00:06:53.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00756933 s, 139 MB/s 00:06:53.773 16:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.773 16:14:47 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.031 256+0 records in 00:06:54.031 256+0 records out 00:06:54.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235154 s, 44.6 MB/s 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.031 256+0 records in 00:06:54.031 256+0 records out 00:06:54.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339952 s, 30.8 MB/s 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.031 16:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.289 16:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.549 16:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.808 16:14:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.808 16:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.808 16:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.066 16:14:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.066 16:14:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.632 16:14:48 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:57.006 [2024-10-08 16:14:50.022131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.006 [2024-10-08 16:14:50.300044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.006 [2024-10-08 16:14:50.300058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.264 [2024-10-08 16:14:50.514098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.264 [2024-10-08 16:14:50.514188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.638 16:14:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58420 /var/tmp/spdk-nbd.sock 00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58420 ']' 00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
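The nbd_dd_data_verify flow above writes 1 MiB of /dev/urandom through dd onto each exported device, then `cmp -b -n 1M` compares the device contents back against the pattern file. The same write-then-verify round trip can be sketched against plain files (no block device or `oflag=direct` is assumed here, and `status=none` is added only to keep the transfer statistics out of the output):

```shell
# Sketch of the write/verify round trip from nbd_dd_data_verify, using
# regular temp files in place of /dev/nbd0 and /dev/nbd1.
pattern=$(mktemp)
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none

targets=("$(mktemp)" "$(mktemp)")
for t in "${targets[@]}"; do
    # Write phase: push the random pattern onto each target.
    dd if="$pattern" of="$t" bs=4096 count=256 status=none
done
for t in "${targets[@]}"; do
    # Verify phase: byte-compare the first 1 MiB back against the pattern.
    cmp -b -n 1M "$pattern" "$t" && echo "verify ok: $t"
done
rm -f "$pattern" "${targets[@]}"
```

Writing one shared pattern file and comparing every device against it (rather than reading each device back into its own copy) is what lets the harness catch cross-device corruption with a single source of truth.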
00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.638 16:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:58.897 16:14:51 event.app_repeat -- event/event.sh@39 -- # killprocess 58420 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58420 ']' 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58420 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.897 16:14:51 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58420 00:06:58.897 killing process with pid 58420 00:06:58.897 16:14:52 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.897 16:14:52 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.897 16:14:52 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58420' 00:06:58.897 16:14:52 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58420 00:06:58.897 16:14:52 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58420 00:07:00.272 spdk_app_start is called in Round 0. 00:07:00.272 Shutdown signal received, stop current app iteration 00:07:00.272 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:07:00.272 spdk_app_start is called in Round 1. 00:07:00.272 Shutdown signal received, stop current app iteration 00:07:00.272 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:07:00.272 spdk_app_start is called in Round 2. 
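The killprocess helper exercised above probes the pid with `kill -0`, checks the process name via `ps`, sends SIGTERM, and then `wait`s for the pid so the exit status is reaped. A simplified stand-alone version, dropping the trace's `uname`/`reactor_0`/sudo name checks:

```shell
# Sketch of the killprocess pattern: verify the pid is alive, terminate it
# with SIGTERM, and reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    echo "killing process with pid $pid"
    kill "$pid"                              # SIGTERM, as in the trace
    wait "$pid" 2>/dev/null || true          # reap; TERM exit status expected
    return 0
}

# Demo against a throwaway background sleep standing in for spdk_tgt.
sleep 30 & victim=$!
killprocess "$victim"
kill -0 "$victim" 2>/dev/null || echo "pid $victim is gone"
```

The leading `kill -0` probe matters: it distinguishes "process already exited" (helper returns failure so the caller can flag an unexpected death) from a normal teardown.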
00:07:00.272 Shutdown signal received, stop current app iteration 00:07:00.272 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:07:00.272 spdk_app_start is called in Round 3. 00:07:00.272 Shutdown signal received, stop current app iteration 00:07:00.272 16:14:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:00.272 16:14:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:00.272 00:07:00.272 real 0m22.506s 00:07:00.272 user 0m48.329s 00:07:00.272 sys 0m3.504s 00:07:00.272 16:14:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.272 16:14:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.272 ************************************ 00:07:00.272 END TEST app_repeat 00:07:00.272 ************************************ 00:07:00.272 16:14:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:00.272 16:14:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:00.272 16:14:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.272 16:14:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.272 16:14:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.272 ************************************ 00:07:00.272 START TEST cpu_locks 00:07:00.272 ************************************ 00:07:00.272 16:14:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:00.272 * Looking for test storage... 
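Every "START TEST" / "END TEST" banner pair in this log is emitted by the run_test wrapper, which takes a test name plus a command, runs the command, and prints the closing banner only on success. A stripped-down version of that convention (the real helper in autotest_common.sh also manages xtrace state and timing, which is omitted here):

```shell
# Sketch of the run_test banner convention used throughout this log.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@" || return 1   # propagate the wrapped test's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test demo_case true
```

Because the END banner is gated on the command's exit status, a missing "END TEST" line in a log like this one is itself a reliable failure marker when scanning output.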
00:07:00.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:00.272 16:14:53 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.272 16:14:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.272 16:14:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.272 16:14:53 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.272 16:14:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.273 16:14:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.273 --rc genhtml_branch_coverage=1 00:07:00.273 --rc genhtml_function_coverage=1 00:07:00.273 --rc genhtml_legend=1 00:07:00.273 --rc geninfo_all_blocks=1 00:07:00.273 --rc geninfo_unexecuted_blocks=1 00:07:00.273 00:07:00.273 ' 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.273 --rc genhtml_branch_coverage=1 00:07:00.273 --rc genhtml_function_coverage=1 00:07:00.273 --rc genhtml_legend=1 00:07:00.273 --rc geninfo_all_blocks=1 00:07:00.273 --rc geninfo_unexecuted_blocks=1 
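The lcov probe above runs the cmp_versions helper from scripts/common.sh: both dotted versions are split on `.-:` into arrays (`read -ra ver1`) and compared field by field, with missing fields treated as zero, to answer calls like `lt 1.15 2`. The numeric core of that comparison can be sketched as follows (the suffix/pre-release handling of the real helper is omitted):

```shell
# Sketch of the cmp_versions field-by-field comparison from scripts/common.sh:
# version_lt returns 0 when $1 < $2, mirroring the 'lt 1.15 2' call above.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Padding the shorter array with zeros is what makes `1.15 < 2` come out correctly: a plain string comparison would rank "1.15" after "2" lexically by accident, but "1.9" after "1.15" incorrectly.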
00:07:00.273 00:07:00.273 ' 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.273 --rc genhtml_branch_coverage=1 00:07:00.273 --rc genhtml_function_coverage=1 00:07:00.273 --rc genhtml_legend=1 00:07:00.273 --rc geninfo_all_blocks=1 00:07:00.273 --rc geninfo_unexecuted_blocks=1 00:07:00.273 00:07:00.273 ' 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.273 --rc genhtml_branch_coverage=1 00:07:00.273 --rc genhtml_function_coverage=1 00:07:00.273 --rc genhtml_legend=1 00:07:00.273 --rc geninfo_all_blocks=1 00:07:00.273 --rc geninfo_unexecuted_blocks=1 00:07:00.273 00:07:00.273 ' 00:07:00.273 16:14:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:00.273 16:14:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:00.273 16:14:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:00.273 16:14:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.273 16:14:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.273 ************************************ 00:07:00.273 START TEST default_locks 00:07:00.273 ************************************ 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58897 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58897 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58897 ']' 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.273 16:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.531 [2024-10-08 16:14:53.620666] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:00.531 [2024-10-08 16:14:53.620875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58897 ] 00:07:00.531 [2024-10-08 16:14:53.800939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.097 [2024-10-08 16:14:54.131287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.085 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.085 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:02.085 16:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58897 00:07:02.085 16:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58897 00:07:02.085 16:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58897 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58897 ']' 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58897 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58897 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.343 killing process with pid 58897 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58897' 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58897 00:07:02.343 16:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58897 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58897 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58897 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58897 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58897 ']' 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.629 ERROR: process (pid: 58897) is no longer running 00:07:05.629 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58897) - No such process 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.629 00:07:05.629 real 0m4.733s 00:07:05.629 user 0m4.614s 00:07:05.629 sys 0m0.883s 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.629 ************************************ 00:07:05.629 END TEST default_locks 00:07:05.629 ************************************ 00:07:05.629 16:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.629 16:14:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:05.629 16:14:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:05.629 16:14:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.629 16:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.629 ************************************ 00:07:05.629 START TEST default_locks_via_rpc 00:07:05.629 ************************************ 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58985 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58985 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58985 ']' 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.629 16:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.629 [2024-10-08 16:14:58.396587] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
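The `NOT waitforlisten 58897` call traced above exercises the NOT helper: it runs a command that is expected to fail (here, waiting on an already-killed pid) and inverts the result, so the surrounding test passes only when the inner command errors out. The trace shows the real helper capturing the status in `es` and checking it; a minimal version of that expect-failure wrapper:

```shell
# Sketch of the NOT expect-failure wrapper: succeed only when the wrapped
# command fails, as used for the dead-pid waitforlisten check above.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert: a non-zero inner status becomes success for the caller.
    (( es != 0 ))
}

NOT false && echo "NOT false -> pass"
NOT ls /nonexistent-path-58897 2>/dev/null && echo "NOT ls missing -> pass"
```

The `|| es=$?` capture is deliberate: it keeps a failing inner command from tripping `set -e` in the caller, which is why the harness can assert on expected failures inside an otherwise fail-fast script.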
00:07:05.629 [2024-10-08 16:14:58.397302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:07:05.629 [2024-10-08 16:14:58.566130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.629 [2024-10-08 16:14:58.841175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.570 16:14:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58985 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58985 00:07:06.570 16:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58985 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58985 ']' 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58985 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58985 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.137 killing process with pid 58985 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58985' 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58985 00:07:07.137 16:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58985 00:07:10.423 00:07:10.423 real 0m4.738s 00:07:10.423 user 0m4.642s 00:07:10.423 sys 0m0.893s 00:07:10.423 16:15:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.423 
************************************ 00:07:10.423 END TEST default_locks_via_rpc 00:07:10.423 ************************************ 00:07:10.423 16:15:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.423 16:15:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:10.423 16:15:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.423 16:15:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.423 16:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.423 ************************************ 00:07:10.423 START TEST non_locking_app_on_locked_coremask 00:07:10.423 ************************************ 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59070 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59070 /var/tmp/spdk.sock 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59070 ']' 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.423 16:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.423 [2024-10-08 16:15:03.189500] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:10.423 [2024-10-08 16:15:03.190579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:07:10.423 [2024-10-08 16:15:03.371623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.423 [2024-10-08 16:15:03.650927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59086 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59086 /var/tmp/spdk2.sock 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59086 ']' 00:07:11.358 16:15:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.358 16:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.616 [2024-10-08 16:15:04.762033] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:11.616 [2024-10-08 16:15:04.763057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59086 ] 00:07:11.874 [2024-10-08 16:15:04.943485] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.874 [2024-10-08 16:15:04.947612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.445 [2024-10-08 16:15:05.503352] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.361 16:15:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.361 16:15:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.361 16:15:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59070 00:07:14.361 16:15:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59070 00:07:14.361 16:15:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59070 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59070 ']' 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59070 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59070 00:07:15.297 killing process with pid 59070 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59070' 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59070 00:07:15.297 16:15:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59070 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59086 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59086 ']' 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59086 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59086 00:07:20.628 killing process with pid 59086 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59086' 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59086 00:07:20.628 16:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59086 00:07:23.910 00:07:23.910 real 0m13.554s 00:07:23.910 user 0m13.871s 00:07:23.910 sys 0m1.893s 00:07:23.910 16:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:23.910 16:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.910 ************************************ 00:07:23.910 END TEST non_locking_app_on_locked_coremask 00:07:23.910 ************************************ 00:07:23.910 16:15:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:23.910 16:15:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.910 16:15:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.910 16:15:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.910 ************************************ 00:07:23.910 START TEST locking_app_on_unlocked_coremask 00:07:23.910 ************************************ 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:23.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59257 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59257 /var/tmp/spdk.sock 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59257 ']' 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.910 16:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.910 [2024-10-08 16:15:16.799356] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:23.911 [2024-10-08 16:15:16.799891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:07:23.911 [2024-10-08 16:15:16.978791] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.911 [2024-10-08 16:15:16.979136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.911 [2024-10-08 16:15:17.228046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59279 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59279 /var/tmp/spdk2.sock 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59279 ']' 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.846 16:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.106 [2024-10-08 16:15:18.276251] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:25.106 [2024-10-08 16:15:18.277101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:07:25.365 [2024-10-08 16:15:18.473885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.932 [2024-10-08 16:15:18.956850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.831 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.831 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:27.831 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59279 00:07:27.831 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.831 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59279 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59257 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59257 ']' 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59257 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59257 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:28.766 killing process with pid 59257 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59257' 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59257 00:07:28.766 16:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59257 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59279 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59279 ']' 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59279 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59279 00:07:34.056 killing process with pid 59279 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59279' 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59279 00:07:34.056 16:15:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59279 00:07:35.972 ************************************ 00:07:35.972 END TEST locking_app_on_unlocked_coremask 00:07:35.972 ************************************ 00:07:35.972 00:07:35.972 real 0m12.466s 00:07:35.972 user 0m12.966s 00:07:35.972 sys 0m1.638s 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.972 16:15:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:35.972 16:15:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.972 16:15:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.972 16:15:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.972 ************************************ 00:07:35.972 START TEST locking_app_on_locked_coremask 00:07:35.972 ************************************ 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:35.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59429 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59429 /var/tmp/spdk.sock 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59429 ']' 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.972 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.973 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.973 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.973 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.973 16:15:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.230 [2024-10-08 16:15:29.304325] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:36.230 [2024-10-08 16:15:29.304508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:07:36.230 [2024-10-08 16:15:29.470919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.489 [2024-10-08 16:15:29.708824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59456 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59456 /var/tmp/spdk2.sock 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59456 /var/tmp/spdk2.sock 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59456 /var/tmp/spdk2.sock 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59456 ']' 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.425 16:15:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 [2024-10-08 16:15:30.741828] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:37.425 [2024-10-08 16:15:30.742429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59456 ] 00:07:37.683 [2024-10-08 16:15:30.938935] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59429 has claimed it. 00:07:37.683 [2024-10-08 16:15:30.939046] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:38.251 ERROR: process (pid: 59456) is no longer running 00:07:38.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59456) - No such process 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59429 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59429 00:07:38.251 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59429 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59429 ']' 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59429 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59429 00:07:38.818 
killing process with pid 59429 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59429' 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59429 00:07:38.818 16:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59429 00:07:41.348 00:07:41.348 real 0m5.479s 00:07:41.348 user 0m5.913s 00:07:41.348 sys 0m1.028s 00:07:41.348 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.348 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.348 ************************************ 00:07:41.348 END TEST locking_app_on_locked_coremask 00:07:41.348 ************************************ 00:07:41.605 16:15:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:41.605 16:15:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.605 16:15:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.605 16:15:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.605 ************************************ 00:07:41.605 START TEST locking_overlapped_coremask 00:07:41.605 ************************************ 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59531 00:07:41.605 16:15:34 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59531 /var/tmp/spdk.sock 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59531 ']' 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.605 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.605 [2024-10-08 16:15:34.857745] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:41.605 [2024-10-08 16:15:34.858202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:07:41.862 [2024-10-08 16:15:35.042428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.121 [2024-10-08 16:15:35.332456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.121 [2024-10-08 16:15:35.332627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.121 [2024-10-08 16:15:35.332649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59549 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59549 /var/tmp/spdk2.sock 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59549 /var/tmp/spdk2.sock 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59549 /var/tmp/spdk2.sock 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59549 ']' 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.113 16:15:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.371 [2024-10-08 16:15:36.456319] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:43.371 [2024-10-08 16:15:36.457475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ] 00:07:43.371 [2024-10-08 16:15:36.659906] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59531 has claimed it. 00:07:43.371 [2024-10-08 16:15:36.660023] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:43.937 ERROR: process (pid: 59549) is no longer running 00:07:43.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59549) - No such process 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59531 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59531 ']' 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59531 00:07:43.937 16:15:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59531 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59531' 00:07:43.937 killing process with pid 59531 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59531 00:07:43.937 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59531 00:07:47.217 00:07:47.217 real 0m5.111s 00:07:47.217 user 0m13.151s 00:07:47.217 sys 0m0.882s 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 ************************************ 00:07:47.217 END TEST locking_overlapped_coremask 00:07:47.217 ************************************ 00:07:47.217 16:15:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:47.217 16:15:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.217 16:15:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.217 16:15:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 ************************************ 00:07:47.217 START TEST 
locking_overlapped_coremask_via_rpc 00:07:47.217 ************************************ 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:47.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59624 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59624 /var/tmp/spdk.sock 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59624 ']' 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.217 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 [2024-10-08 16:15:40.007663] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:47.217 [2024-10-08 16:15:40.008084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59624 ] 00:07:47.217 [2024-10-08 16:15:40.186919] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:47.217 [2024-10-08 16:15:40.186986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.217 [2024-10-08 16:15:40.473250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.217 [2024-10-08 16:15:40.473369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.217 [2024-10-08 16:15:40.473384] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59642 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59642 /var/tmp/spdk2.sock 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59642 ']' 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.150 16:15:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.150 16:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.409 [2024-10-08 16:15:41.588737] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:48.409 [2024-10-08 16:15:41.589220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 00:07:48.667 [2024-10-08 16:15:41.782818] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:48.667 [2024-10-08 16:15:41.782933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.232 [2024-10-08 16:15:42.358422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.232 [2024-10-08 16:15:42.358500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.232 [2024-10-08 16:15:42.358512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.134 16:15:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.134 [2024-10-08 16:15:44.403782] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59624 has claimed it. 00:07:51.134 request: 00:07:51.134 { 00:07:51.134 "method": "framework_enable_cpumask_locks", 00:07:51.134 "req_id": 1 00:07:51.134 } 00:07:51.134 Got JSON-RPC error response 00:07:51.134 response: 00:07:51.134 { 00:07:51.134 "code": -32603, 00:07:51.134 "message": "Failed to claim CPU core: 2" 00:07:51.134 } 00:07:51.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
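The failed claim surfaces to the RPC client as an ordinary JSON-RPC error, shown verbatim in the request/response pair above. A minimal, dependency-free sketch of how a caller might branch on that reply (the response text is copied from the log; the parsing approach is an assumption, not part of SPDK's rpc.py, and `jq` would be more robust):

```shell
# Branch on the JSON-RPC error body from the log above; response copied verbatim.
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'

if grep -q '"code": -32603' <<<"$response"; then
    reason=$(sed -n 's/.*"message": "\([^"]*\)".*/\1/p' <<<"$response")
    echo "claim rejected: $reason"
fi
```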
00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59624 /var/tmp/spdk.sock 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59624 ']' 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
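The `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` steps traced above come from autotest's expect-failure wrapper (`NOT`). A simplified sketch with behavior inferred from the logged steps; the real helper in autotest_common.sh also validates its argument via `valid_exec_arg`, which is omitted here:

```shell
# Simplified expect-failure wrapper, modeled on the es=... trace above.
# Succeeds only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1   # exit codes above 128 (signal deaths) count as plain failure
    (( es != 0 ))            # invert: a nonzero exit status becomes success
}

NOT false && echo "command failed, as required"
NOT true  || echo "command succeeded, so NOT reports failure"
```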
00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.134 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59642 /var/tmp/spdk2.sock 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59642 ']' 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
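Both cpu_locks tests finish with `check_remaining_locks`, which globs the lock files actually present and compares them against a brace-expanded expected set. A self-contained sketch using a scratch directory in place of `/var/tmp` (the comparison logic mirrors the cpu_locks.sh trace; the setup lines are illustrative):

```shell
# Sketch of check_remaining_locks against a scratch directory instead of
# /var/tmp; the glob-vs-brace-expansion comparison mirrors cpu_locks.sh.
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}   # pretend cores 0-2 are claimed

check_remaining_locks() {
    local locks=("$lockdir"/spdk_cpu_lock_*)              # what is on disk
    local expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what should be there
    [[ "${locks[*]}" == "${expected[*]}" ]]
}

check_remaining_locks && echo "exactly cores 000-002 hold locks"
```

Because the glob is compared as a single string against the expansion, any missing or extra lock file makes the check fail.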
00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.394 16:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:51.959 00:07:51.959 real 0m5.140s 00:07:51.959 user 0m1.886s 00:07:51.959 sys 0m0.247s 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.959 16:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.959 ************************************ 00:07:51.959 END TEST locking_overlapped_coremask_via_rpc 00:07:51.959 ************************************ 00:07:51.959 16:15:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:51.959 16:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59624 ]] 00:07:51.959 16:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59624 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59624 ']' 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59624 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59624 00:07:51.959 killing process with pid 59624 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59624' 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59624 00:07:51.959 16:15:45 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59624 00:07:54.490 16:15:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59642 ]] 00:07:54.490 16:15:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59642 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59642 ']' 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59642 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59642 00:07:54.490 killing process with pid 59642 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59642' 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59642 00:07:54.490 16:15:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59642 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.775 Process with pid 59624 is not found 00:07:57.775 Process with pid 59642 is not found 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59624 ]] 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59624 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59624 ']' 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59624 00:07:57.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59624) - No such process 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59624 is not found' 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59642 ]] 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59642 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59642 ']' 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59642 00:07:57.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59642) - No such process 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59642 is not found' 00:07:57.775 16:15:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.775 00:07:57.775 real 0m57.190s 00:07:57.775 user 1m35.798s 00:07:57.775 sys 0m8.968s 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.775 16:15:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 
************************************ 00:07:57.775 END TEST cpu_locks 00:07:57.775 ************************************ 00:07:57.775 00:07:57.775 real 1m32.361s 00:07:57.775 user 2m42.966s 00:07:57.775 sys 0m13.806s 00:07:57.775 16:15:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.775 16:15:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 ************************************ 00:07:57.775 END TEST event 00:07:57.775 ************************************ 00:07:57.775 16:15:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.775 16:15:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.775 16:15:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.775 16:15:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 ************************************ 00:07:57.775 START TEST thread 00:07:57.775 ************************************ 00:07:57.775 16:15:50 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.775 * Looking for test storage... 
00:07:57.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:57.775 16:15:50 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.775 16:15:50 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.775 16:15:50 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.775 16:15:50 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.775 16:15:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.775 16:15:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.775 16:15:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.775 16:15:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.775 16:15:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.775 16:15:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.775 16:15:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.775 16:15:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.775 16:15:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.775 16:15:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.775 16:15:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.775 16:15:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:57.775 16:15:50 thread -- scripts/common.sh@345 -- # : 1 00:07:57.775 16:15:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.775 16:15:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.775 16:15:50 thread -- scripts/common.sh@365 -- # decimal 1 00:07:57.775 16:15:50 thread -- scripts/common.sh@353 -- # local d=1 00:07:57.775 16:15:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.775 16:15:50 thread -- scripts/common.sh@355 -- # echo 1 00:07:57.775 16:15:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.775 16:15:50 thread -- scripts/common.sh@366 -- # decimal 2 00:07:57.775 16:15:50 thread -- scripts/common.sh@353 -- # local d=2 00:07:57.775 16:15:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.776 16:15:50 thread -- scripts/common.sh@355 -- # echo 2 00:07:57.776 16:15:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.776 16:15:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.776 16:15:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.776 16:15:50 thread -- scripts/common.sh@368 -- # return 0 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.776 --rc genhtml_branch_coverage=1 00:07:57.776 --rc genhtml_function_coverage=1 00:07:57.776 --rc genhtml_legend=1 00:07:57.776 --rc geninfo_all_blocks=1 00:07:57.776 --rc geninfo_unexecuted_blocks=1 00:07:57.776 00:07:57.776 ' 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.776 --rc genhtml_branch_coverage=1 00:07:57.776 --rc genhtml_function_coverage=1 00:07:57.776 --rc genhtml_legend=1 00:07:57.776 --rc geninfo_all_blocks=1 00:07:57.776 --rc geninfo_unexecuted_blocks=1 00:07:57.776 00:07:57.776 ' 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.776 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.776 --rc genhtml_branch_coverage=1 00:07:57.776 --rc genhtml_function_coverage=1 00:07:57.776 --rc genhtml_legend=1 00:07:57.776 --rc geninfo_all_blocks=1 00:07:57.776 --rc geninfo_unexecuted_blocks=1 00:07:57.776 00:07:57.776 ' 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.776 --rc genhtml_branch_coverage=1 00:07:57.776 --rc genhtml_function_coverage=1 00:07:57.776 --rc genhtml_legend=1 00:07:57.776 --rc geninfo_all_blocks=1 00:07:57.776 --rc geninfo_unexecuted_blocks=1 00:07:57.776 00:07:57.776 ' 00:07:57.776 16:15:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.776 16:15:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.776 ************************************ 00:07:57.776 START TEST thread_poller_perf 00:07:57.776 ************************************ 00:07:57.776 16:15:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.776 [2024-10-08 16:15:50.849939] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:07:57.776 [2024-10-08 16:15:50.851440] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59848 ] 00:07:57.776 [2024-10-08 16:15:51.037487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.344 [2024-10-08 16:15:51.364277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.344 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:59.724 [2024-10-08T16:15:53.046Z] ====================================== 00:07:59.724 [2024-10-08T16:15:53.046Z] busy:2213847259 (cyc) 00:07:59.724 [2024-10-08T16:15:53.046Z] total_run_count: 297000 00:07:59.724 [2024-10-08T16:15:53.046Z] tsc_hz: 2200000000 (cyc) 00:07:59.724 [2024-10-08T16:15:53.046Z] ====================================== 00:07:59.724 [2024-10-08T16:15:53.046Z] poller_cost: 7454 (cyc), 3388 (nsec) 00:07:59.724 00:07:59.724 real 0m2.055s 00:07:59.724 user 0m1.783s 00:07:59.724 sys 0m0.156s 00:07:59.724 16:15:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.724 16:15:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.724 ************************************ 00:07:59.724 END TEST thread_poller_perf 00:07:59.724 ************************************ 00:07:59.725 16:15:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.725 16:15:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:59.725 16:15:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.725 16:15:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.725 ************************************ 00:07:59.725 START TEST thread_poller_perf 00:07:59.725 
************************************ 00:07:59.725 16:15:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.725 [2024-10-08 16:15:52.955641] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:59.725 [2024-10-08 16:15:52.956030] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:07:59.983 [2024-10-08 16:15:53.128510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.241 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:00.241 [2024-10-08 16:15:53.401541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.654 [2024-10-08T16:15:54.976Z] ====================================== 00:08:01.654 [2024-10-08T16:15:54.976Z] busy:2203797115 (cyc) 00:08:01.654 [2024-10-08T16:15:54.976Z] total_run_count: 3837000 00:08:01.654 [2024-10-08T16:15:54.976Z] tsc_hz: 2200000000 (cyc) 00:08:01.654 [2024-10-08T16:15:54.976Z] ====================================== 00:08:01.654 [2024-10-08T16:15:54.976Z] poller_cost: 574 (cyc), 260 (nsec) 00:08:01.654 00:08:01.654 real 0m1.953s 00:08:01.654 user 0m1.698s 00:08:01.654 sys 0m0.142s 00:08:01.654 ************************************ 00:08:01.654 END TEST thread_poller_perf 00:08:01.654 ************************************ 00:08:01.654 16:15:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.654 16:15:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.654 16:15:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:01.654 ************************************ 00:08:01.654 END TEST thread 00:08:01.654 ************************************ 00:08:01.654 
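The `poller_cost` rows in the two result tables above follow directly from the printed counters: busy TSC cycles divided by `total_run_count`, then converted to nanoseconds via `tsc_hz`. Reproducing the first run's numbers in shell arithmetic (values copied from the table; the formula is inferred from the printed fields):

```shell
# Derive poller_cost from the first poller_perf run's counters above.
busy=2213847259        # busy: ... (cyc)
runs=297000            # total_run_count
tsc_hz=2200000000      # tsc_hz: ... (cyc)

cost_cyc=$(( busy / runs ))                      # cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # cycles -> nanoseconds
echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"
```

This reproduces the `poller_cost: 7454 (cyc), 3388 (nsec)` line from the first run; the second run's `574 (cyc), 260 (nsec)` follows the same way from its counters.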
00:08:01.654 real 0m4.316s 00:08:01.654 user 0m3.637s 00:08:01.654 sys 0m0.447s 00:08:01.654 16:15:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.654 16:15:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.654 16:15:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:01.654 16:15:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:01.654 16:15:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.654 16:15:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.654 16:15:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.654 ************************************ 00:08:01.654 START TEST app_cmdline 00:08:01.654 ************************************ 00:08:01.654 16:15:54 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:01.913 * Looking for test storage... 00:08:01.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.913 16:15:55 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.913 16:15:55 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.913 16:15:55 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.913 16:15:55 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.913 16:15:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.914 16:15:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.914 --rc genhtml_branch_coverage=1 00:08:01.914 --rc genhtml_function_coverage=1 00:08:01.914 --rc 
genhtml_legend=1 00:08:01.914 --rc geninfo_all_blocks=1 00:08:01.914 --rc geninfo_unexecuted_blocks=1 00:08:01.914 00:08:01.914 ' 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.914 --rc genhtml_branch_coverage=1 00:08:01.914 --rc genhtml_function_coverage=1 00:08:01.914 --rc genhtml_legend=1 00:08:01.914 --rc geninfo_all_blocks=1 00:08:01.914 --rc geninfo_unexecuted_blocks=1 00:08:01.914 00:08:01.914 ' 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.914 --rc genhtml_branch_coverage=1 00:08:01.914 --rc genhtml_function_coverage=1 00:08:01.914 --rc genhtml_legend=1 00:08:01.914 --rc geninfo_all_blocks=1 00:08:01.914 --rc geninfo_unexecuted_blocks=1 00:08:01.914 00:08:01.914 ' 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.914 --rc genhtml_branch_coverage=1 00:08:01.914 --rc genhtml_function_coverage=1 00:08:01.914 --rc genhtml_legend=1 00:08:01.914 --rc geninfo_all_blocks=1 00:08:01.914 --rc geninfo_unexecuted_blocks=1 00:08:01.914 00:08:01.914 ' 00:08:01.914 16:15:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:01.914 16:15:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59985 00:08:01.914 16:15:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:01.914 16:15:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59985 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59985 ']' 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.914 16:15:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:02.173 [2024-10-08 16:15:55.311901] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:02.173 [2024-10-08 16:15:55.312424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:08:02.173 [2024-10-08 16:15:55.492804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.739 [2024-10-08 16:15:55.776253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.673 16:15:56 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.673 16:15:56 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:03.673 16:15:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:03.931 { 00:08:03.931 "version": "SPDK v25.01-pre git sha1 ba5b39cb2", 00:08:03.931 "fields": { 00:08:03.931 "major": 25, 00:08:03.931 "minor": 1, 00:08:03.931 "patch": 0, 00:08:03.931 "suffix": "-pre", 00:08:03.931 "commit": "ba5b39cb2" 00:08:03.931 } 00:08:03.931 } 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:03.931 16:15:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.931 16:15:57 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:04.189 request: 00:08:04.189 { 00:08:04.189 "method": "env_dpdk_get_mem_stats", 00:08:04.189 "req_id": 1 00:08:04.189 } 00:08:04.189 Got JSON-RPC error response 00:08:04.189 response: 00:08:04.189 { 00:08:04.189 "code": -32601, 00:08:04.189 "message": "Method not found" 00:08:04.189 } 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.189 16:15:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59985 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59985 ']' 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59985 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.189 16:15:57 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59985 00:08:04.190 killing process with pid 59985 00:08:04.190 16:15:57 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.190 16:15:57 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.190 16:15:57 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59985' 00:08:04.190 16:15:57 app_cmdline -- common/autotest_common.sh@969 -- # kill 59985 00:08:04.190 16:15:57 app_cmdline -- common/autotest_common.sh@974 -- # wait 59985 00:08:07.477 ************************************ 00:08:07.477 END TEST app_cmdline 00:08:07.477 ************************************ 
00:08:07.477 00:08:07.477 real 0m5.249s 00:08:07.477 user 0m5.479s 00:08:07.477 sys 0m0.910s 00:08:07.477 16:16:00 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.477 16:16:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:07.477 16:16:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:07.477 16:16:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.477 16:16:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.477 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:08:07.477 ************************************ 00:08:07.477 START TEST version 00:08:07.477 ************************************ 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:07.477 * Looking for test storage... 00:08:07.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:07.477 16:16:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.477 16:16:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.477 16:16:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.477 16:16:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.477 16:16:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.477 16:16:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.477 16:16:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.477 16:16:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.477 16:16:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.477 16:16:00 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:07.477 16:16:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.477 16:16:00 version -- scripts/common.sh@344 -- # case "$op" in 00:08:07.477 16:16:00 version -- scripts/common.sh@345 -- # : 1 00:08:07.477 16:16:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.477 16:16:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.477 16:16:00 version -- scripts/common.sh@365 -- # decimal 1 00:08:07.477 16:16:00 version -- scripts/common.sh@353 -- # local d=1 00:08:07.477 16:16:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.477 16:16:00 version -- scripts/common.sh@355 -- # echo 1 00:08:07.477 16:16:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.477 16:16:00 version -- scripts/common.sh@366 -- # decimal 2 00:08:07.477 16:16:00 version -- scripts/common.sh@353 -- # local d=2 00:08:07.477 16:16:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.477 16:16:00 version -- scripts/common.sh@355 -- # echo 2 00:08:07.477 16:16:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.477 16:16:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.477 16:16:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.477 16:16:00 version -- scripts/common.sh@368 -- # return 0 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.477 --rc genhtml_branch_coverage=1 00:08:07.477 --rc genhtml_function_coverage=1 00:08:07.477 --rc genhtml_legend=1 00:08:07.477 --rc geninfo_all_blocks=1 00:08:07.477 --rc geninfo_unexecuted_blocks=1 00:08:07.477 00:08:07.477 ' 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:08:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.477 --rc genhtml_branch_coverage=1 00:08:07.477 --rc genhtml_function_coverage=1 00:08:07.477 --rc genhtml_legend=1 00:08:07.477 --rc geninfo_all_blocks=1 00:08:07.477 --rc geninfo_unexecuted_blocks=1 00:08:07.477 00:08:07.477 ' 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.477 --rc genhtml_branch_coverage=1 00:08:07.477 --rc genhtml_function_coverage=1 00:08:07.477 --rc genhtml_legend=1 00:08:07.477 --rc geninfo_all_blocks=1 00:08:07.477 --rc geninfo_unexecuted_blocks=1 00:08:07.477 00:08:07.477 ' 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.477 --rc genhtml_branch_coverage=1 00:08:07.477 --rc genhtml_function_coverage=1 00:08:07.477 --rc genhtml_legend=1 00:08:07.477 --rc geninfo_all_blocks=1 00:08:07.477 --rc geninfo_unexecuted_blocks=1 00:08:07.477 00:08:07.477 ' 00:08:07.477 16:16:00 version -- app/version.sh@17 -- # get_header_version major 00:08:07.477 16:16:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # cut -f2 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.477 16:16:00 version -- app/version.sh@17 -- # major=25 00:08:07.477 16:16:00 version -- app/version.sh@18 -- # get_header_version minor 00:08:07.477 16:16:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # cut -f2 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.477 16:16:00 version -- app/version.sh@18 -- # minor=1 00:08:07.477 16:16:00 
version -- app/version.sh@19 -- # get_header_version patch 00:08:07.477 16:16:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # cut -f2 00:08:07.477 16:16:00 version -- app/version.sh@19 -- # patch=0 00:08:07.477 16:16:00 version -- app/version.sh@20 -- # get_header_version suffix 00:08:07.477 16:16:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # cut -f2 00:08:07.477 16:16:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.477 16:16:00 version -- app/version.sh@20 -- # suffix=-pre 00:08:07.477 16:16:00 version -- app/version.sh@22 -- # version=25.1 00:08:07.477 16:16:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:07.477 16:16:00 version -- app/version.sh@28 -- # version=25.1rc0 00:08:07.477 16:16:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.477 16:16:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:07.477 16:16:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:07.477 16:16:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:07.477 ************************************ 00:08:07.477 END TEST version 00:08:07.477 ************************************ 00:08:07.477 00:08:07.477 real 0m0.270s 00:08:07.477 user 0m0.176s 00:08:07.477 sys 0m0.133s 00:08:07.477 16:16:00 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.477 16:16:00 version -- common/autotest_common.sh@10 -- # set +x 00:08:07.477 
16:16:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:07.477 16:16:00 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:07.477 16:16:00 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:07.477 16:16:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.477 16:16:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.477 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:08:07.477 ************************************ 00:08:07.477 START TEST bdev_raid 00:08:07.477 ************************************ 00:08:07.477 16:16:00 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:07.477 * Looking for test storage... 00:08:07.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:07.477 16:16:00 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:07.477 16:16:00 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:08:07.477 16:16:00 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:07.477 16:16:00 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:07.477 16:16:00 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.478 16:16:00 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:07.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.478 --rc genhtml_branch_coverage=1 00:08:07.478 --rc genhtml_function_coverage=1 00:08:07.478 --rc genhtml_legend=1 00:08:07.478 --rc geninfo_all_blocks=1 00:08:07.478 --rc geninfo_unexecuted_blocks=1 00:08:07.478 00:08:07.478 ' 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:07.478 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:07.478 --rc genhtml_branch_coverage=1 00:08:07.478 --rc genhtml_function_coverage=1 00:08:07.478 --rc genhtml_legend=1 00:08:07.478 --rc geninfo_all_blocks=1 00:08:07.478 --rc geninfo_unexecuted_blocks=1 00:08:07.478 00:08:07.478 ' 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:07.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.478 --rc genhtml_branch_coverage=1 00:08:07.478 --rc genhtml_function_coverage=1 00:08:07.478 --rc genhtml_legend=1 00:08:07.478 --rc geninfo_all_blocks=1 00:08:07.478 --rc geninfo_unexecuted_blocks=1 00:08:07.478 00:08:07.478 ' 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:07.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.478 --rc genhtml_branch_coverage=1 00:08:07.478 --rc genhtml_function_coverage=1 00:08:07.478 --rc genhtml_legend=1 00:08:07.478 --rc geninfo_all_blocks=1 00:08:07.478 --rc geninfo_unexecuted_blocks=1 00:08:07.478 00:08:07.478 ' 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:07.478 16:16:00 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:07.478 16:16:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.478 16:16:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.736 ************************************ 
00:08:07.736 START TEST raid1_resize_data_offset_test 00:08:07.736 ************************************ 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60178 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60178' 00:08:07.736 Process raid pid: 60178 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60178 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60178 ']' 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.736 16:16:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.736 [2024-10-08 16:16:00.895608] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:08:07.736 [2024-10-08 16:16:00.895991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.994 [2024-10-08 16:16:01.067140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.252 [2024-10-08 16:16:01.352416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.509 [2024-10-08 16:16:01.582905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.509 [2024-10-08 16:16:01.583246] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.767 16:16:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.767 16:16:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:08:08.767 16:16:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:08.767 16:16:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.767 16:16:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.767 malloc0 00:08:08.767 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.767 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:08.767 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.767 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.029 malloc1 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.029 16:16:02 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.029 null0 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.029 [2024-10-08 16:16:02.175248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:09.029 [2024-10-08 16:16:02.178026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:09.029 [2024-10-08 16:16:02.178099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:09.029 [2024-10-08 16:16:02.178369] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:09.029 [2024-10-08 16:16:02.178390] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:09.029 [2024-10-08 16:16:02.178814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:09.029 [2024-10-08 16:16:02.179091] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:09.029 [2024-10-08 16:16:02.179114] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:09.029 [2024-10-08 16:16:02.179408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.029 [2024-10-08 16:16:02.239464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.029 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 malloc2 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 [2024-10-08 16:16:02.855769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:09.625 [2024-10-08 16:16:02.873188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.625 [2024-10-08 16:16:02.875905] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60178 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60178 ']' 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60178 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:08:09.625 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60178 00:08:09.885 killing process with pid 60178 00:08:09.885 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.885 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.885 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60178' 00:08:09.885 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60178 00:08:09.885 16:16:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60178 00:08:09.885 [2024-10-08 16:16:02.965665] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.885 [2024-10-08 16:16:02.967230] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:09.885 [2024-10-08 16:16:02.967313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.885 [2024-10-08 16:16:02.967342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:09.885 [2024-10-08 16:16:02.997148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.885 [2024-10-08 16:16:02.997843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.885 [2024-10-08 16:16:02.997887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:11.842 [2024-10-08 16:16:04.831771] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.217 16:16:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:13.217 00:08:13.217 real 0m5.372s 00:08:13.217 user 0m5.202s 00:08:13.217 sys 0m0.817s 00:08:13.217 16:16:06 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.217 ************************************ 00:08:13.217 END TEST raid1_resize_data_offset_test 00:08:13.217 ************************************ 00:08:13.217 16:16:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.217 16:16:06 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:13.217 16:16:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.217 16:16:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.217 16:16:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.217 ************************************ 00:08:13.217 START TEST raid0_resize_superblock_test 00:08:13.218 ************************************ 00:08:13.218 Process raid pid: 60278 00:08:13.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60278 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60278' 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60278 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60278 ']' 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.218 16:16:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.218 [2024-10-08 16:16:06.355925] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:08:13.218 [2024-10-08 16:16:06.356424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.218 [2024-10-08 16:16:06.538023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.785 [2024-10-08 16:16:06.891580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.043 [2024-10-08 16:16:07.129316] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.043 [2024-10-08 16:16:07.129634] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.043 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.043 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:14.043 16:16:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:14.043 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.043 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.974 malloc0 00:08:14.974 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.974 16:16:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:14.974 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.974 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.974 [2024-10-08 16:16:07.973568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:14.974 [2024-10-08 16:16:07.973677] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.975 [2024-10-08 16:16:07.973716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.975 [2024-10-08 16:16:07.973737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.975 [2024-10-08 16:16:07.976800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.975 [2024-10-08 16:16:07.976854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:14.975 pt0 00:08:14.975 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:14.975 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 d57ce348-56ac-4686-aadc-f39467ad9691 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 ec104ff8-40e9-45eb-9020-3096c0578c9e 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 b0a14022-b47d-4f73-80a5-60a78bca75b9 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 [2024-10-08 16:16:08.168489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev ec104ff8-40e9-45eb-9020-3096c0578c9e is claimed 00:08:14.975 [2024-10-08 16:16:08.168778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0a14022-b47d-4f73-80a5-60a78bca75b9 is claimed 00:08:14.975 [2024-10-08 16:16:08.168987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:14.975 [2024-10-08 16:16:08.169015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:14.975 [2024-10-08 16:16:08.169363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.975 [2024-10-08 16:16:08.169655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:14.975 [2024-10-08 16:16:08.169675] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:14.975 [2024-10-08 16:16:08.169879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.975 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 [2024-10-08 
16:16:08.288905] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 [2024-10-08 16:16:08.341013] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:15.241 [2024-10-08 16:16:08.341068] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ec104ff8-40e9-45eb-9020-3096c0578c9e' was resized: old size 131072, new size 204800 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.241 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 [2024-10-08 16:16:08.348751] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:15.242 [2024-10-08 16:16:08.348783] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b0a14022-b47d-4f73-80a5-60a78bca75b9' was resized: old size 131072, new size 204800 00:08:15.242 
[2024-10-08 16:16:08.348826] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:15.242 16:16:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 [2024-10-08 16:16:08.460919] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 [2024-10-08 16:16:08.508664] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:15.242 [2024-10-08 16:16:08.508775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:15.242 [2024-10-08 16:16:08.508797] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.242 [2024-10-08 16:16:08.508824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:15.242 [2024-10-08 16:16:08.509034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.242 [2024-10-08 16:16:08.509095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.242 
[2024-10-08 16:16:08.509119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 [2024-10-08 16:16:08.516543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:15.242 [2024-10-08 16:16:08.516635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.242 [2024-10-08 16:16:08.516669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:15.242 [2024-10-08 16:16:08.516688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.242 [2024-10-08 16:16:08.520074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.242 pt0 00:08:15.242 [2024-10-08 16:16:08.520254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 [2024-10-08 16:16:08.522761] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ec104ff8-40e9-45eb-9020-3096c0578c9e 00:08:15.242 [2024-10-08 16:16:08.522883] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev ec104ff8-40e9-45eb-9020-3096c0578c9e is claimed 00:08:15.242 [2024-10-08 16:16:08.523041] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b0a14022-b47d-4f73-80a5-60a78bca75b9 00:08:15.242 [2024-10-08 16:16:08.523076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0a14022-b47d-4f73-80a5-60a78bca75b9 is claimed 00:08:15.242 [2024-10-08 16:16:08.523232] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b0a14022-b47d-4f73-80a5-60a78bca75b9 (2) smaller than existing raid bdev Raid (3) 00:08:15.242 [2024-10-08 16:16:08.523265] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ec104ff8-40e9-45eb-9020-3096c0578c9e: File exists 00:08:15.242 [2024-10-08 16:16:08.523324] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:15.242 [2024-10-08 16:16:08.523344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:15.242 [2024-10-08 16:16:08.523863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:15.242 [2024-10-08 16:16:08.524220] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:15.242 [2024-10-08 16:16:08.524349] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:15.242 [2024-10-08 16:16:08.524863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:15.242 [2024-10-08 16:16:08.537020] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.242 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60278 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60278 ']' 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60278 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60278 00:08:15.500 killing process with pid 60278 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60278' 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60278 00:08:15.500 [2024-10-08 16:16:08.617383] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.500 16:16:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60278 00:08:15.500 [2024-10-08 16:16:08.617513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.500 [2024-10-08 16:16:08.617612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.500 [2024-10-08 16:16:08.617629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:16.875 [2024-10-08 16:16:10.078170] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.312 ************************************ 00:08:18.312 END TEST raid0_resize_superblock_test 00:08:18.312 ************************************ 00:08:18.312 16:16:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:18.312 00:08:18.312 real 0m5.263s 00:08:18.312 user 0m5.371s 00:08:18.312 sys 0m0.868s 00:08:18.312 16:16:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.312 16:16:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.312 16:16:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:18.312 16:16:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:18.312 16:16:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.312 16:16:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.312 ************************************ 00:08:18.312 START TEST raid1_resize_superblock_test 00:08:18.312 
************************************ 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:18.312 Process raid pid: 60382 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60382 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60382' 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60382 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60382 ']' 00:08:18.312 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.313 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.313 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.313 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.313 16:16:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.571 [2024-10-08 16:16:11.661825] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:08:18.571 [2024-10-08 16:16:11.662028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.571 [2024-10-08 16:16:11.834845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.828 [2024-10-08 16:16:12.122434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.086 [2024-10-08 16:16:12.357624] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.086 [2024-10-08 16:16:12.357677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.651 16:16:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.651 16:16:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:19.651 16:16:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:19.651 16:16:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.651 16:16:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.217 malloc0 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.217 [2024-10-08 16:16:13.307093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:20.217 [2024-10-08 16:16:13.307366] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.217 [2024-10-08 16:16:13.307411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.217 [2024-10-08 16:16:13.307432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.217 [2024-10-08 16:16:13.310614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.217 [2024-10-08 16:16:13.310667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:20.217 pt0 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.217 78b5d68d-65cd-4431-9273-37331fb7eba2 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.217 f51d7f95-ef4d-4a65-8a43-12b799872b82 00:08:20.217 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.218 16:16:13 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.218 7f121b9f-164b-470e-b981-6813c230a20a
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.218 [2024-10-08 16:16:13.512319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f51d7f95-ef4d-4a65-8a43-12b799872b82 is claimed
00:08:20.218 [2024-10-08 16:16:13.512451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7f121b9f-164b-470e-b981-6813c230a20a is claimed
00:08:20.218 [2024-10-08 16:16:13.512727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:20.218 [2024-10-08 16:16:13.512756] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:08:20.218 [2024-10-08 16:16:13.513105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:20.218 [2024-10-08 16:16:13.513367] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:20.218 [2024-10-08 16:16:13.513385] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:20.218 [2024-10-08 16:16:13.513654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.218 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:08:20.476 [2024-10-08
16:16:13.636617] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 [2024-10-08 16:16:13.684663] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:20.476 [2024-10-08 16:16:13.684699] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f51d7f95-ef4d-4a65-8a43-12b799872b82' was resized: old size 131072, new size 204800
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 [2024-10-08 16:16:13.692487] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:20.476 [2024-10-08 16:16:13.692515] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7f121b9f-164b-470e-b981-6813c230a20a' was resized: old size 131072, new size 204800
00:08:20.476
[2024-10-08 16:16:13.692623] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:20.476 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:20.735 16:16:13
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 [2024-10-08 16:16:13.816649] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 [2024-10-08 16:16:13.868406] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:08:20.735 [2024-10-08 16:16:13.868560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:08:20.735 [2024-10-08 16:16:13.868620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:08:20.735 [2024-10-08 16:16:13.868873] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:20.735 [2024-10-08 16:16:13.869202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:20.735 [2024-10-08 16:16:13.869313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:20.735 [2024-10-08 16:16:13.869357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 [2024-10-08 16:16:13.876329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:20.735 [2024-10-08 16:16:13.876427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:20.735 [2024-10-08 16:16:13.876458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:08:20.735 [2024-10-08 16:16:13.876475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:20.735 [2024-10-08 16:16:13.879837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:20.735 [2024-10-08 16:16:13.879914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:20.735 pt0
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.735 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 [2024-10-08 16:16:13.882492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f51d7f95-ef4d-4a65-8a43-12b799872b82
00:08:20.735 [2024-10-08
16:16:13.882598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f51d7f95-ef4d-4a65-8a43-12b799872b82 is claimed
00:08:20.735 [2024-10-08 16:16:13.882746] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7f121b9f-164b-470e-b981-6813c230a20a
00:08:20.736 [2024-10-08 16:16:13.882868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7f121b9f-164b-470e-b981-6813c230a20a is claimed
00:08:20.736 [2024-10-08 16:16:13.883061] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7f121b9f-164b-470e-b981-6813c230a20a (2) smaller than existing raid bdev Raid (3)
00:08:20.736 [2024-10-08 16:16:13.883096] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f51d7f95-ef4d-4a65-8a43-12b799872b82: File exists
00:08:20.736 [2024-10-08 16:16:13.883155] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:08:20.736 [2024-10-08 16:16:13.883175] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:20.736 [2024-10-08 16:16:13.883503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:08:20.736 [2024-10-08 16:16:13.883765] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:08:20.736 [2024-10-08 16:16:13.883788] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:08:20.736 [2024-10-08 16:16:13.883977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test --
common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
[2024-10-08 16:16:13.900800] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60382
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60382 ']'
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60382
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:20.736 16:16:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60382
killing process with pid 60382
00:08:20.736 16:16:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:20.736 16:16:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:20.736 16:16:14 bdev_raid.raid1_resize_superblock_test --
common/autotest_common.sh@968 -- # echo 'killing process with pid 60382'
00:08:20.736 16:16:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60382
00:08:20.736 16:16:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60382
00:08:20.736 [2024-10-08 16:16:14.003501] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:20.736 [2024-10-08 16:16:14.003676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:20.736 [2024-10-08 16:16:14.003764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:20.736 [2024-10-08 16:16:14.003779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:08:22.637 [2024-10-08 16:16:15.476774] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:23.573 16:16:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
************************************
00:08:23.573 END TEST raid1_resize_superblock_test
************************************
00:08:23.573
00:08:23.573 real 0m5.265s
00:08:23.573 user 0m5.484s
00:08:23.573 sys 0m0.797s
00:08:23.573 16:16:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:23.573 16:16:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:08:23.573 16:16:16 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:08:23.573
16:16:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:23.573 16:16:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:23.573 16:16:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:23.831 ************************************
00:08:23.831 START TEST raid_function_test_raid0
00:08:23.831 ************************************
00:08:23.831 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:08:23.831 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:08:23.832 Process raid pid: 60490
00:08:23.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60490
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60490'
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60490
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60490 ']'
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:23.832 16:16:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:23.832 [2024-10-08 16:16:17.022551] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
[2024-10-08 16:16:17.023095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:24.090 [2024-10-08 16:16:17.201077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.347 [2024-10-08 16:16:17.483749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.605 [2024-10-08 16:16:17.712009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.605 [2024-10-08 16:16:17.712311] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.863 16:16:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:24.863 16:16:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0
00:08:24.863 16:16:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:08:24.863 16:16:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.863 16:16:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:24.863 Base_1
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.863
16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:24.863 Base_2
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:24.863 [2024-10-08 16:16:18.086958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:24.863 [2024-10-08 16:16:18.089911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:24.863 [2024-10-08 16:16:18.090026] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:24.863 [2024-10-08 16:16:18.090049] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:24.863 [2024-10-08 16:16:18.090435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:24.863 [2024-10-08 16:16:18.090819] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:24.863 [2024-10-08 16:16:18.090960] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:08:24.863 [2024-10-08 16:16:18.091399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:08:24.863 16:16:18
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:24.863 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:08:25.122 [2024-10-08 16:16:18.411667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
/dev/nbd0
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- #
waitfornbd nbd0
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:25.379 1+0 records in
00:08:25.379 1+0 records out
00:08:25.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383537 s, 10.7 MB/s
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 --
bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:25.379 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:25.637 {
00:08:25.637 "nbd_device": "/dev/nbd0",
00:08:25.637 "bdev_name": "raid"
00:08:25.637 }
00:08:25.637 ]'
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:08:25.637 {
00:08:25.637 "nbd_device": "/dev/nbd0",
00:08:25.637 "bdev_name": "raid"
00:08:25.637 }
00:08:25.637 ]'
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 --
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:08:25.637 4096+0 records in
00:08:25.637 4096+0 records out
00:08:25.637 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0356046 s, 58.9 MB/s
00:08:25.637 16:16:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:08:26.204 4096+0 records in
00:08:26.204 4096+0 records out
00:08:26.204 2097152 bytes (2.1 MB, 2.0 MiB) copied,
0.340961 s, 6.2 MB/s
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:08:26.204 128+0 records in
00:08:26.204 128+0 records out
00:08:26.204 65536 bytes (66 kB, 64 KiB) copied, 0.00110293 s, 59.4 MB/s
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:08:26.204 2035+0 records in
00:08:26.204 2035+0 records out
00:08:26.204 1041920
bytes (1.0 MB, 1018 KiB) copied, 0.00959035 s, 109 MB/s 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:26.204 456+0 records in 00:08:26.204 456+0 records out 00:08:26.204 233472 bytes (233 kB, 228 KiB) copied, 0.00286845 s, 81.4 MB/s 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.204 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:26.462 [2024-10-08 16:16:19.622511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:26.462 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60490 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60490 ']' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60490 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.721 16:16:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60490 00:08:26.721 killing process with pid 60490 00:08:26.721 16:16:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.721 16:16:20 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.721 16:16:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60490' 00:08:26.721 16:16:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60490 00:08:26.721 [2024-10-08 16:16:20.008847] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.721 16:16:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60490 00:08:26.721 [2024-10-08 16:16:20.009027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.721 [2024-10-08 16:16:20.009114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.721 [2024-10-08 16:16:20.009136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:27.019 [2024-10-08 16:16:20.224863] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.394 ************************************ 00:08:28.394 END TEST raid_function_test_raid0 00:08:28.394 ************************************ 00:08:28.394 16:16:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:28.394 00:08:28.395 real 0m4.679s 00:08:28.395 user 0m5.514s 00:08:28.395 sys 0m1.119s 00:08:28.395 16:16:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.395 16:16:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 16:16:21 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:28.395 16:16:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.395 16:16:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.395 16:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 
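The `raid_function_test_raid0` trace above is SPDK's `raid_unmap_data_verify` loop (`bdev_raid.sh` markers @29-@48): seed a reference file with random data, mirror it onto the raid bdev through `/dev/nbd0`, then for each (offset, length) pair zero the range in the reference file with `dd`, `blkdiscard` the same range on the device, flush, and `cmp`. A minimal file-based sketch of that check follows; the temp files stand in for `/raidtest/raidrandtest` and the nbd device, and both sides are zeroed with `dd` because the real test's `blkdiscard` relies on discarded raid0 ranges reading back as zeros.

```shell
# Sketch of the raid_unmap_data_verify loop traced above, using two plain
# files in place of /raidtest/raidrandtest and /dev/nbd0. Assumption: the
# real test discards on the nbd device; here both sides are zeroed directly.
raid_unmap_check() {
  local ref dev
  ref=$(mktemp) || return 1
  dev=$(mktemp) || return 1

  # Seed 4096 512-byte blocks of random data, mirror them to the "device",
  # and confirm the baseline matches (bdev_raid.sh @29-@34 in the trace).
  dd if=/dev/urandom of="$ref" bs=512 count=4096 status=none
  cp "$ref" "$dev"
  cmp -s -n 2097152 "$ref" "$dev" || return 1

  # Same (offset_blocks, num_blocks) pairs as the trace: 0/128, 1028/2035, 321/456.
  local offs=(0 1028 321) nums=(128 2035 456) i
  for i in 0 1 2; do
    # Zero the range in the reference file (bdev_raid.sh @41)...
    dd if=/dev/zero of="$ref" bs=512 seek="${offs[$i]}" count="${nums[$i]}" \
       conv=notrunc status=none
    # ...and the same range on the "device" (blkdiscard @44 in the real test),
    # then re-verify the full 2 MiB (@48).
    dd if=/dev/zero of="$dev" bs=512 seek="${offs[$i]}" count="${nums[$i]}" \
       conv=notrunc status=none
    cmp -s -n 2097152 "$ref" "$dev" || return 1
  done
  rm -f "$ref" "$dev"
}
```

The three-iteration loop matches the `(( i < 3 ))` counter visible in the trace; the function returns nonzero on the first mismatch, which is what makes `return 0` at `@52` meaningful as a pass.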
************************************ 00:08:28.395 START TEST raid_function_test_concat 00:08:28.395 ************************************ 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60625 00:08:28.395 Process raid pid: 60625 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60625' 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60625 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60625 ']' 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.395 16:16:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:28.653 [2024-10-08 16:16:21.762893] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:28.653 [2024-10-08 16:16:21.763095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.653 [2024-10-08 16:16:21.938745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.911 [2024-10-08 16:16:22.219424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.169 [2024-10-08 16:16:22.447087] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.169 [2024-10-08 16:16:22.447161] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.428 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.428 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:08:29.428 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:29.428 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.428 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:29.687 Base_1 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:29.687 Base_2 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:29.687 [2024-10-08 16:16:22.834974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:29.687 [2024-10-08 16:16:22.837590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:29.687 [2024-10-08 16:16:22.837690] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:29.687 [2024-10-08 16:16:22.837714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:29.687 [2024-10-08 16:16:22.838067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:29.687 [2024-10-08 16:16:22.838312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:29.687 [2024-10-08 16:16:22.838340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:29.687 [2024-10-08 16:16:22.838573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.687 16:16:22 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:29.687 16:16:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:29.945 [2024-10-08 16:16:23.195186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:29.945 /dev/nbd0 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:29.945 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.946 1+0 records in 00:08:29.946 1+0 records out 00:08:29.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365557 s, 11.2 MB/s 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.946 
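The `waitfornbd` helper traced here (`nbd_common.sh` markers @868-@889) retries up to 20 times, grepping `/proc/partitions` for the device name and `break`ing once it appears. A generic sketch of that poll-until-present pattern, with the function name, file argument, and 0.1 s delay as illustrative choices (the real helper hard-codes `/proc/partitions` and 20 attempts):

```shell
# Generic form of the waitfornbd loop from nbd_common.sh: poll a file
# for a whole-word match, giving up after max_tries attempts.
# waitfor_word, the configurable file, and the sleep interval are
# assumptions of this sketch, not SPDK helpers.
waitfor_word() {
  local word=$1 file=$2 max_tries=${3:-20} i
  for ((i = 1; i <= max_tries; i++)); do
    if grep -q -w "$word" "$file" 2>/dev/null; then
      return 0   # the "break" in the traced helper
    fi
    sleep 0.1
  done
  return 1
}
```

Usage mirrors the trace: `waitfor_word nbd0 /proc/partitions` after `nbd_start_disk`, with a nonzero return meaning the device never showed up.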
16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:29.946 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:30.513 { 00:08:30.513 "nbd_device": "/dev/nbd0", 00:08:30.513 "bdev_name": "raid" 00:08:30.513 } 00:08:30.513 ]' 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:30.513 { 00:08:30.513 "nbd_device": "/dev/nbd0", 00:08:30.513 "bdev_name": "raid" 00:08:30.513 } 00:08:30.513 ]' 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:30.513 
16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:30.513 4096+0 records in 00:08:30.513 4096+0 records out 00:08:30.513 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0290715 s, 72.1 MB/s 00:08:30.513 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:30.771 4096+0 records in 00:08:30.771 4096+0 
records out 00:08:30.771 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.351734 s, 6.0 MB/s 00:08:30.771 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:30.771 16:16:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:30.771 128+0 records in 00:08:30.771 128+0 records out 00:08:30.771 65536 bytes (66 kB, 64 KiB) copied, 0.000479836 s, 137 MB/s 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:08:30.771 2035+0 records in 00:08:30.771 2035+0 records out 00:08:30.771 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.010044 s, 104 MB/s 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:30.771 456+0 records in 00:08:30.771 456+0 records out 00:08:30.771 233472 bytes (233 kB, 228 KiB) copied, 0.0022704 s, 103 MB/s 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:30.771 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.031 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:31.324 [2024-10-08 16:16:24.403178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:31.324 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.324 16:16:24 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60625 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60625 ']' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60625 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60625 00:08:31.584 killing process with pid 60625 00:08:31.584 16:16:24 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60625' 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60625 00:08:31.584 16:16:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60625 00:08:31.584 [2024-10-08 16:16:24.765470] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.584 [2024-10-08 16:16:24.765712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.584 [2024-10-08 16:16:24.765789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.584 [2024-10-08 16:16:24.765820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:31.842 [2024-10-08 16:16:24.975199] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.218 16:16:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:33.218 00:08:33.218 real 0m4.668s 00:08:33.218 user 0m5.509s 00:08:33.218 sys 0m1.115s 00:08:33.218 16:16:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.218 ************************************ 00:08:33.218 END TEST raid_function_test_concat 00:08:33.218 16:16:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 ************************************ 00:08:33.218 16:16:26 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:33.218 16:16:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.218 16:16:26 bdev_raid -- 
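Both test functions end with the `killprocess` helper traced above (`autotest_common.sh` markers @950-@974): confirm the pid argument is nonempty, probe it with `kill -0`, kill it, and `wait` for it to exit. A reduced sketch under stated assumptions — the real helper also resolves the process name via `ps --no-headers -o comm=` and takes a `sudo` branch when the target runs as root, both omitted here:

```shell
# Sketch of the killprocess teardown from autotest_common.sh: verify the
# pid is alive, kill it, and reap it. The sudo/process-name handling of
# the real helper is intentionally left out of this sketch.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                # '[' -z "$pid" ']' guard (@950)
  kill -0 "$pid" 2>/dev/null || return 0   # kill -0 probe (@954): already gone
  kill "$pid"                              # kill (@969)
  wait "$pid" 2>/dev/null || true          # wait (@974); ignore signal status
}
```

`wait` only reaps children of the calling shell, which is why the trace runs it in the same shell that launched `bdev_svc`.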
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.218 16:16:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 ************************************ 00:08:33.218 START TEST raid0_resize_test 00:08:33.218 ************************************ 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:33.218 Process raid pid: 60759 00:08:33.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60759 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60759' 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60759 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60759 ']' 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.218 16:16:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 [2024-10-08 16:16:26.467280] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:33.218 [2024-10-08 16:16:26.467731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.477 [2024-10-08 16:16:26.644151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.747 [2024-10-08 16:16:26.932490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.005 [2024-10-08 16:16:27.167104] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.005 [2024-10-08 16:16:27.167433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 Base_1 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 Base_2 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 [2024-10-08 16:16:27.508490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:34.263 [2024-10-08 16:16:27.511255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:34.263 [2024-10-08 16:16:27.511371] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.263 [2024-10-08 16:16:27.511391] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:34.263 [2024-10-08 16:16:27.511769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:34.263 [2024-10-08 16:16:27.511946] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.263 [2024-10-08 16:16:27.511970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:34.263 [2024-10-08 16:16:27.512221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 [2024-10-08 16:16:27.516436] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.263 [2024-10-08 16:16:27.516475] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:34.263 true 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:34.263 [2024-10-08 16:16:27.528696] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 [2024-10-08 16:16:27.576588] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.263 [2024-10-08 16:16:27.576659] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:34.263 [2024-10-08 16:16:27.576717] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:34.263 true 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.263 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:34.521 [2024-10-08 16:16:27.588779] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60759 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60759 ']' 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60759 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60759 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.521 killing process with pid 60759 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60759' 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60759 00:08:34.521 16:16:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60759 00:08:34.521 [2024-10-08 16:16:27.664068] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.521 [2024-10-08 16:16:27.664237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.521 [2024-10-08 16:16:27.664340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.521 [2024-10-08 16:16:27.664357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:34.521 [2024-10-08 16:16:27.681693] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.936 ************************************ 00:08:35.936 END TEST raid0_resize_test 00:08:35.936 ************************************ 00:08:35.936 16:16:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:35.936 00:08:35.936 real 0m2.673s 00:08:35.936 user 0m2.852s 
00:08:35.936 sys 0m0.478s 00:08:35.936 16:16:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.936 16:16:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.936 16:16:29 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:35.936 16:16:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:35.936 16:16:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.936 16:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.936 ************************************ 00:08:35.936 START TEST raid1_resize_test 00:08:35.936 ************************************ 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:35.936 Process raid pid: 60821 00:08:35.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60821 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60821' 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60821 00:08:35.936 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60821 ']' 00:08:35.937 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.937 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.937 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.937 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.937 16:16:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 [2024-10-08 16:16:29.176392] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:08:35.937 [2024-10-08 16:16:29.176905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.202 [2024-10-08 16:16:29.361001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.466 [2024-10-08 16:16:29.671203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.731 [2024-10-08 16:16:29.925123] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.731 [2024-10-08 16:16:29.925208] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 Base_1 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 Base_2 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 [2024-10-08 16:16:30.184843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:36.991 [2024-10-08 16:16:30.187471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:36.991 [2024-10-08 16:16:30.187815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:36.991 [2024-10-08 16:16:30.187847] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:36.991 [2024-10-08 16:16:30.188190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:36.991 [2024-10-08 16:16:30.188363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:36.991 [2024-10-08 16:16:30.188380] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:36.991 [2024-10-08 16:16:30.188591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 [2024-10-08 16:16:30.192779] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:36.991 [2024-10-08 16:16:30.192815] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:36.991 true 00:08:36.991 
16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 [2024-10-08 16:16:30.205018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 [2024-10-08 16:16:30.252962] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:36.991 [2024-10-08 16:16:30.253036] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:36.991 [2024-10-08 16:16:30.253092] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:36.991 true 00:08:36.991 16:16:30 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 [2024-10-08 16:16:30.265114] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.991 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60821 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60821 ']' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60821 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60821 00:08:37.251 killing process with pid 60821 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.251 16:16:30 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60821' 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60821 00:08:37.251 16:16:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60821 00:08:37.251 [2024-10-08 16:16:30.348834] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.251 [2024-10-08 16:16:30.349008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.251 [2024-10-08 16:16:30.349778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.251 [2024-10-08 16:16:30.349817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:37.251 [2024-10-08 16:16:30.366059] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.691 16:16:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:38.691 00:08:38.691 real 0m2.612s 00:08:38.691 user 0m2.785s 00:08:38.691 sys 0m0.450s 00:08:38.691 16:16:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.691 ************************************ 00:08:38.691 END TEST raid1_resize_test 00:08:38.691 ************************************ 00:08:38.691 16:16:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 16:16:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:38.691 16:16:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:38.691 16:16:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:38.691 16:16:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.691 16:16:31 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.691 16:16:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 ************************************ 00:08:38.691 START TEST raid_state_function_test 00:08:38.691 ************************************ 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60883 00:08:38.691 Process raid pid: 60883 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60883' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60883 00:08:38.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60883 ']' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.691 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.692 16:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.692 [2024-10-08 16:16:31.864776] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:38.692 [2024-10-08 16:16:31.864982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.951 [2024-10-08 16:16:32.033910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.209 [2024-10-08 16:16:32.310092] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.468 [2024-10-08 16:16:32.539434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.468 [2024-10-08 16:16:32.539510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.728 [2024-10-08 16:16:32.811168] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.728 [2024-10-08 16:16:32.811539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.728 [2024-10-08 16:16:32.811570] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.728 [2024-10-08 16:16:32.811593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.728 "name": "Existed_Raid", 00:08:39.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.728 "strip_size_kb": 64, 00:08:39.728 "state": "configuring", 00:08:39.728 "raid_level": "raid0", 00:08:39.728 "superblock": false, 00:08:39.728 "num_base_bdevs": 2, 00:08:39.728 "num_base_bdevs_discovered": 0, 00:08:39.728 "num_base_bdevs_operational": 2, 00:08:39.728 "base_bdevs_list": [ 00:08:39.728 { 00:08:39.728 "name": "BaseBdev1", 00:08:39.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.728 "is_configured": false, 00:08:39.728 "data_offset": 0, 00:08:39.728 "data_size": 0 00:08:39.728 }, 00:08:39.728 { 00:08:39.728 "name": "BaseBdev2", 00:08:39.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.728 "is_configured": false, 00:08:39.728 "data_offset": 0, 00:08:39.728 "data_size": 0 00:08:39.728 } 00:08:39.728 ] 00:08:39.728 }' 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.728 16:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 [2024-10-08 16:16:33.319241] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.296 [2024-10-08 16:16:33.319327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 [2024-10-08 16:16:33.331187] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.296 [2024-10-08 16:16:33.331384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.296 [2024-10-08 16:16:33.331543] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.296 [2024-10-08 16:16:33.331710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 [2024-10-08 16:16:33.397500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.296 BaseBdev1 
00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.296 [ 00:08:40.296 { 00:08:40.296 "name": "BaseBdev1", 00:08:40.296 "aliases": [ 00:08:40.296 "0a481929-2d20-4b83-a315-065fa1d5de5c" 00:08:40.296 ], 00:08:40.296 "product_name": "Malloc disk", 00:08:40.296 "block_size": 512, 00:08:40.296 "num_blocks": 65536, 00:08:40.296 "uuid": "0a481929-2d20-4b83-a315-065fa1d5de5c", 00:08:40.296 "assigned_rate_limits": { 00:08:40.296 "rw_ios_per_sec": 0, 00:08:40.296 
"rw_mbytes_per_sec": 0, 00:08:40.296 "r_mbytes_per_sec": 0, 00:08:40.296 "w_mbytes_per_sec": 0 00:08:40.296 }, 00:08:40.296 "claimed": true, 00:08:40.296 "claim_type": "exclusive_write", 00:08:40.296 "zoned": false, 00:08:40.296 "supported_io_types": { 00:08:40.296 "read": true, 00:08:40.296 "write": true, 00:08:40.296 "unmap": true, 00:08:40.296 "flush": true, 00:08:40.296 "reset": true, 00:08:40.296 "nvme_admin": false, 00:08:40.296 "nvme_io": false, 00:08:40.296 "nvme_io_md": false, 00:08:40.296 "write_zeroes": true, 00:08:40.296 "zcopy": true, 00:08:40.296 "get_zone_info": false, 00:08:40.296 "zone_management": false, 00:08:40.296 "zone_append": false, 00:08:40.296 "compare": false, 00:08:40.296 "compare_and_write": false, 00:08:40.296 "abort": true, 00:08:40.296 "seek_hole": false, 00:08:40.296 "seek_data": false, 00:08:40.296 "copy": true, 00:08:40.296 "nvme_iov_md": false 00:08:40.296 }, 00:08:40.296 "memory_domains": [ 00:08:40.296 { 00:08:40.296 "dma_device_id": "system", 00:08:40.296 "dma_device_type": 1 00:08:40.296 }, 00:08:40.296 { 00:08:40.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.296 "dma_device_type": 2 00:08:40.296 } 00:08:40.296 ], 00:08:40.296 "driver_specific": {} 00:08:40.296 } 00:08:40.296 ] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.296 16:16:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.296 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.297 "name": "Existed_Raid", 00:08:40.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.297 "strip_size_kb": 64, 00:08:40.297 "state": "configuring", 00:08:40.297 "raid_level": "raid0", 00:08:40.297 "superblock": false, 00:08:40.297 "num_base_bdevs": 2, 00:08:40.297 "num_base_bdevs_discovered": 1, 00:08:40.297 "num_base_bdevs_operational": 2, 00:08:40.297 "base_bdevs_list": [ 00:08:40.297 { 00:08:40.297 "name": "BaseBdev1", 00:08:40.297 "uuid": "0a481929-2d20-4b83-a315-065fa1d5de5c", 00:08:40.297 "is_configured": true, 00:08:40.297 "data_offset": 0, 00:08:40.297 "data_size": 65536 00:08:40.297 }, 00:08:40.297 { 00:08:40.297 "name": "BaseBdev2", 00:08:40.297 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.297 "is_configured": false, 00:08:40.297 "data_offset": 0, 00:08:40.297 "data_size": 0 00:08:40.297 } 00:08:40.297 ] 00:08:40.297 }' 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.297 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 [2024-10-08 16:16:33.941766] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.863 [2024-10-08 16:16:33.941870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 [2024-10-08 16:16:33.949701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.863 [2024-10-08 16:16:33.952683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.863 [2024-10-08 16:16:33.952742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 16:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.863 16:16:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.863 "name": "Existed_Raid", 00:08:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.863 "strip_size_kb": 64, 00:08:40.863 "state": "configuring", 00:08:40.863 "raid_level": "raid0", 00:08:40.863 "superblock": false, 00:08:40.863 "num_base_bdevs": 2, 00:08:40.863 "num_base_bdevs_discovered": 1, 00:08:40.863 "num_base_bdevs_operational": 2, 00:08:40.863 "base_bdevs_list": [ 00:08:40.863 { 00:08:40.863 "name": "BaseBdev1", 00:08:40.863 "uuid": "0a481929-2d20-4b83-a315-065fa1d5de5c", 00:08:40.863 "is_configured": true, 00:08:40.863 "data_offset": 0, 00:08:40.863 "data_size": 65536 00:08:40.863 }, 00:08:40.863 { 00:08:40.863 "name": "BaseBdev2", 00:08:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.863 "is_configured": false, 00:08:40.863 "data_offset": 0, 00:08:40.863 "data_size": 0 00:08:40.863 } 00:08:40.863 ] 00:08:40.863 }' 00:08:40.863 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.863 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.429 [2024-10-08 16:16:34.550692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.429 [2024-10-08 16:16:34.550782] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.429 [2024-10-08 16:16:34.550801] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:41.429 [2024-10-08 16:16:34.551152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.429 [2024-10-08 
16:16:34.551395] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.429 [2024-10-08 16:16:34.551422] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.429 BaseBdev2 00:08:41.429 [2024-10-08 16:16:34.551843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.429 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.429 [ 00:08:41.429 { 
00:08:41.429 "name": "BaseBdev2", 00:08:41.429 "aliases": [ 00:08:41.429 "3ea9e49f-03c3-43a4-bdb8-5e40d2329107" 00:08:41.429 ], 00:08:41.429 "product_name": "Malloc disk", 00:08:41.429 "block_size": 512, 00:08:41.429 "num_blocks": 65536, 00:08:41.429 "uuid": "3ea9e49f-03c3-43a4-bdb8-5e40d2329107", 00:08:41.429 "assigned_rate_limits": { 00:08:41.429 "rw_ios_per_sec": 0, 00:08:41.429 "rw_mbytes_per_sec": 0, 00:08:41.429 "r_mbytes_per_sec": 0, 00:08:41.429 "w_mbytes_per_sec": 0 00:08:41.429 }, 00:08:41.429 "claimed": true, 00:08:41.429 "claim_type": "exclusive_write", 00:08:41.429 "zoned": false, 00:08:41.429 "supported_io_types": { 00:08:41.429 "read": true, 00:08:41.429 "write": true, 00:08:41.429 "unmap": true, 00:08:41.429 "flush": true, 00:08:41.429 "reset": true, 00:08:41.429 "nvme_admin": false, 00:08:41.429 "nvme_io": false, 00:08:41.429 "nvme_io_md": false, 00:08:41.429 "write_zeroes": true, 00:08:41.429 "zcopy": true, 00:08:41.429 "get_zone_info": false, 00:08:41.429 "zone_management": false, 00:08:41.429 "zone_append": false, 00:08:41.429 "compare": false, 00:08:41.429 "compare_and_write": false, 00:08:41.429 "abort": true, 00:08:41.430 "seek_hole": false, 00:08:41.430 "seek_data": false, 00:08:41.430 "copy": true, 00:08:41.430 "nvme_iov_md": false 00:08:41.430 }, 00:08:41.430 "memory_domains": [ 00:08:41.430 { 00:08:41.430 "dma_device_id": "system", 00:08:41.430 "dma_device_type": 1 00:08:41.430 }, 00:08:41.430 { 00:08:41.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.430 "dma_device_type": 2 00:08:41.430 } 00:08:41.430 ], 00:08:41.430 "driver_specific": {} 00:08:41.430 } 00:08:41.430 ] 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.430 "name": "Existed_Raid", 00:08:41.430 "uuid": "ee77475c-96c7-472d-8183-50cfcbbfb926", 00:08:41.430 
"strip_size_kb": 64, 00:08:41.430 "state": "online", 00:08:41.430 "raid_level": "raid0", 00:08:41.430 "superblock": false, 00:08:41.430 "num_base_bdevs": 2, 00:08:41.430 "num_base_bdevs_discovered": 2, 00:08:41.430 "num_base_bdevs_operational": 2, 00:08:41.430 "base_bdevs_list": [ 00:08:41.430 { 00:08:41.430 "name": "BaseBdev1", 00:08:41.430 "uuid": "0a481929-2d20-4b83-a315-065fa1d5de5c", 00:08:41.430 "is_configured": true, 00:08:41.430 "data_offset": 0, 00:08:41.430 "data_size": 65536 00:08:41.430 }, 00:08:41.430 { 00:08:41.430 "name": "BaseBdev2", 00:08:41.430 "uuid": "3ea9e49f-03c3-43a4-bdb8-5e40d2329107", 00:08:41.430 "is_configured": true, 00:08:41.430 "data_offset": 0, 00:08:41.430 "data_size": 65536 00:08:41.430 } 00:08:41.430 ] 00:08:41.430 }' 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.430 16:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.998 
16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.998 [2024-10-08 16:16:35.127359] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.998 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.998 "name": "Existed_Raid", 00:08:41.998 "aliases": [ 00:08:41.998 "ee77475c-96c7-472d-8183-50cfcbbfb926" 00:08:41.998 ], 00:08:41.998 "product_name": "Raid Volume", 00:08:41.998 "block_size": 512, 00:08:41.998 "num_blocks": 131072, 00:08:41.998 "uuid": "ee77475c-96c7-472d-8183-50cfcbbfb926", 00:08:41.998 "assigned_rate_limits": { 00:08:41.998 "rw_ios_per_sec": 0, 00:08:41.998 "rw_mbytes_per_sec": 0, 00:08:41.998 "r_mbytes_per_sec": 0, 00:08:41.998 "w_mbytes_per_sec": 0 00:08:41.998 }, 00:08:41.998 "claimed": false, 00:08:41.998 "zoned": false, 00:08:41.998 "supported_io_types": { 00:08:41.998 "read": true, 00:08:41.998 "write": true, 00:08:41.998 "unmap": true, 00:08:41.998 "flush": true, 00:08:41.998 "reset": true, 00:08:41.998 "nvme_admin": false, 00:08:41.998 "nvme_io": false, 00:08:41.998 "nvme_io_md": false, 00:08:41.998 "write_zeroes": true, 00:08:41.998 "zcopy": false, 00:08:41.998 "get_zone_info": false, 00:08:41.998 "zone_management": false, 00:08:41.998 "zone_append": false, 00:08:41.998 "compare": false, 00:08:41.998 "compare_and_write": false, 00:08:41.998 "abort": false, 00:08:41.998 "seek_hole": false, 00:08:41.998 "seek_data": false, 00:08:41.998 "copy": false, 00:08:41.998 "nvme_iov_md": false 00:08:41.998 }, 00:08:41.998 "memory_domains": [ 00:08:41.998 { 00:08:41.998 "dma_device_id": "system", 00:08:41.998 "dma_device_type": 1 00:08:41.998 }, 00:08:41.998 { 00:08:41.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.998 "dma_device_type": 2 00:08:41.998 }, 00:08:41.998 { 00:08:41.998 "dma_device_id": "system", 
00:08:41.998 "dma_device_type": 1 00:08:41.998 }, 00:08:41.998 { 00:08:41.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.999 "dma_device_type": 2 00:08:41.999 } 00:08:41.999 ], 00:08:41.999 "driver_specific": { 00:08:41.999 "raid": { 00:08:41.999 "uuid": "ee77475c-96c7-472d-8183-50cfcbbfb926", 00:08:41.999 "strip_size_kb": 64, 00:08:41.999 "state": "online", 00:08:41.999 "raid_level": "raid0", 00:08:41.999 "superblock": false, 00:08:41.999 "num_base_bdevs": 2, 00:08:41.999 "num_base_bdevs_discovered": 2, 00:08:41.999 "num_base_bdevs_operational": 2, 00:08:41.999 "base_bdevs_list": [ 00:08:41.999 { 00:08:41.999 "name": "BaseBdev1", 00:08:41.999 "uuid": "0a481929-2d20-4b83-a315-065fa1d5de5c", 00:08:41.999 "is_configured": true, 00:08:41.999 "data_offset": 0, 00:08:41.999 "data_size": 65536 00:08:41.999 }, 00:08:41.999 { 00:08:41.999 "name": "BaseBdev2", 00:08:41.999 "uuid": "3ea9e49f-03c3-43a4-bdb8-5e40d2329107", 00:08:41.999 "is_configured": true, 00:08:41.999 "data_offset": 0, 00:08:41.999 "data_size": 65536 00:08:41.999 } 00:08:41.999 ] 00:08:41.999 } 00:08:41.999 } 00:08:41.999 }' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.999 BaseBdev2' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.999 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.268 [2024-10-08 16:16:35.363083] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.268 [2024-10-08 16:16:35.363370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.268 [2024-10-08 16:16:35.363476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.268 "name": "Existed_Raid", 00:08:42.268 "uuid": "ee77475c-96c7-472d-8183-50cfcbbfb926", 00:08:42.268 "strip_size_kb": 64, 00:08:42.268 "state": "offline", 00:08:42.268 "raid_level": "raid0", 00:08:42.268 "superblock": false, 00:08:42.268 "num_base_bdevs": 2, 00:08:42.268 "num_base_bdevs_discovered": 1, 00:08:42.268 "num_base_bdevs_operational": 1, 00:08:42.268 "base_bdevs_list": [ 00:08:42.268 { 00:08:42.268 "name": null, 00:08:42.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.268 "is_configured": false, 00:08:42.268 "data_offset": 0, 00:08:42.268 "data_size": 65536 00:08:42.268 }, 00:08:42.268 { 00:08:42.268 "name": "BaseBdev2", 00:08:42.268 "uuid": "3ea9e49f-03c3-43a4-bdb8-5e40d2329107", 00:08:42.268 "is_configured": true, 00:08:42.268 "data_offset": 0, 00:08:42.268 "data_size": 65536 00:08:42.268 } 00:08:42.268 ] 00:08:42.268 }' 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.268 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.835 16:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.835 [2024-10-08 16:16:35.969482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.835 [2024-10-08 16:16:35.969819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.835 
16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60883 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60883 ']' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60883 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60883 00:08:42.835 killing process with pid 60883 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60883' 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60883 00:08:42.835 [2024-10-08 16:16:36.142457] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.835 16:16:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 60883 00:08:43.093 [2024-10-08 16:16:36.157977] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.473 ************************************ 00:08:44.473 END TEST raid_state_function_test 00:08:44.473 ************************************ 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.473 00:08:44.473 real 0m5.739s 00:08:44.473 user 0m8.404s 00:08:44.473 sys 0m0.816s 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.473 16:16:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:44.473 16:16:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:44.473 16:16:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.473 16:16:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.473 ************************************ 00:08:44.473 START TEST raid_state_function_test_sb 00:08:44.473 ************************************ 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:44.473 Process raid pid: 61142 00:08:44.473 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61142 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61142' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61142 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61142 ']' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.473 16:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.473 [2024-10-08 16:16:37.677868] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:08:44.473 [2024-10-08 16:16:37.678109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.731 [2024-10-08 16:16:37.852990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.989 [2024-10-08 16:16:38.130590] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.253 [2024-10-08 16:16:38.374181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.253 [2024-10-08 16:16:38.374483] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.514 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.514 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:45.514 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.515 [2024-10-08 16:16:38.601669] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.515 [2024-10-08 16:16:38.601776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.515 [2024-10-08 16:16:38.601793] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.515 [2024-10-08 16:16:38.601813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.515 
16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.515 "name": "Existed_Raid", 00:08:45.515 "uuid": "792905f3-5477-4b16-b44e-05db20e01cc7", 00:08:45.515 "strip_size_kb": 
64, 00:08:45.515 "state": "configuring", 00:08:45.515 "raid_level": "raid0", 00:08:45.515 "superblock": true, 00:08:45.515 "num_base_bdevs": 2, 00:08:45.515 "num_base_bdevs_discovered": 0, 00:08:45.515 "num_base_bdevs_operational": 2, 00:08:45.515 "base_bdevs_list": [ 00:08:45.515 { 00:08:45.515 "name": "BaseBdev1", 00:08:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.515 "is_configured": false, 00:08:45.515 "data_offset": 0, 00:08:45.515 "data_size": 0 00:08:45.515 }, 00:08:45.515 { 00:08:45.515 "name": "BaseBdev2", 00:08:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.515 "is_configured": false, 00:08:45.515 "data_offset": 0, 00:08:45.515 "data_size": 0 00:08:45.515 } 00:08:45.515 ] 00:08:45.515 }' 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.515 16:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.082 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.082 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 [2024-10-08 16:16:39.133652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.083 [2024-10-08 16:16:39.133736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 [2024-10-08 16:16:39.145629] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.083 [2024-10-08 16:16:39.145702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.083 [2024-10-08 16:16:39.145719] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.083 [2024-10-08 16:16:39.145741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 [2024-10-08 16:16:39.207829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.083 BaseBdev1 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 [ 00:08:46.083 { 00:08:46.083 "name": "BaseBdev1", 00:08:46.083 "aliases": [ 00:08:46.083 "28dcd564-3471-4b83-af54-a9ea69fc8b42" 00:08:46.083 ], 00:08:46.083 "product_name": "Malloc disk", 00:08:46.083 "block_size": 512, 00:08:46.083 "num_blocks": 65536, 00:08:46.083 "uuid": "28dcd564-3471-4b83-af54-a9ea69fc8b42", 00:08:46.083 "assigned_rate_limits": { 00:08:46.083 "rw_ios_per_sec": 0, 00:08:46.083 "rw_mbytes_per_sec": 0, 00:08:46.083 "r_mbytes_per_sec": 0, 00:08:46.083 "w_mbytes_per_sec": 0 00:08:46.083 }, 00:08:46.083 "claimed": true, 00:08:46.083 "claim_type": "exclusive_write", 00:08:46.083 "zoned": false, 00:08:46.083 "supported_io_types": { 00:08:46.083 "read": true, 00:08:46.083 "write": true, 00:08:46.083 "unmap": true, 00:08:46.083 "flush": true, 00:08:46.083 "reset": true, 00:08:46.083 "nvme_admin": false, 00:08:46.083 "nvme_io": false, 00:08:46.083 "nvme_io_md": false, 00:08:46.083 "write_zeroes": true, 00:08:46.083 "zcopy": true, 00:08:46.083 "get_zone_info": false, 00:08:46.083 "zone_management": false, 00:08:46.083 "zone_append": false, 00:08:46.083 "compare": false, 00:08:46.083 "compare_and_write": false, 00:08:46.083 
"abort": true, 00:08:46.083 "seek_hole": false, 00:08:46.083 "seek_data": false, 00:08:46.083 "copy": true, 00:08:46.083 "nvme_iov_md": false 00:08:46.083 }, 00:08:46.083 "memory_domains": [ 00:08:46.083 { 00:08:46.083 "dma_device_id": "system", 00:08:46.083 "dma_device_type": 1 00:08:46.083 }, 00:08:46.083 { 00:08:46.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.083 "dma_device_type": 2 00:08:46.083 } 00:08:46.083 ], 00:08:46.083 "driver_specific": {} 00:08:46.083 } 00:08:46.083 ] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.083 "name": "Existed_Raid", 00:08:46.083 "uuid": "4fdee7ae-dad9-4ef7-bff9-14f673d2c511", 00:08:46.083 "strip_size_kb": 64, 00:08:46.083 "state": "configuring", 00:08:46.083 "raid_level": "raid0", 00:08:46.083 "superblock": true, 00:08:46.083 "num_base_bdevs": 2, 00:08:46.083 "num_base_bdevs_discovered": 1, 00:08:46.083 "num_base_bdevs_operational": 2, 00:08:46.083 "base_bdevs_list": [ 00:08:46.083 { 00:08:46.083 "name": "BaseBdev1", 00:08:46.083 "uuid": "28dcd564-3471-4b83-af54-a9ea69fc8b42", 00:08:46.083 "is_configured": true, 00:08:46.083 "data_offset": 2048, 00:08:46.083 "data_size": 63488 00:08:46.083 }, 00:08:46.083 { 00:08:46.083 "name": "BaseBdev2", 00:08:46.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.083 "is_configured": false, 00:08:46.083 "data_offset": 0, 00:08:46.083 "data_size": 0 00:08:46.083 } 00:08:46.083 ] 00:08:46.083 }' 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.083 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.651 [2024-10-08 16:16:39.756373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.651 [2024-10-08 16:16:39.756475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.651 [2024-10-08 16:16:39.764355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.651 [2024-10-08 16:16:39.766966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.651 [2024-10-08 16:16:39.767026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.651 "name": "Existed_Raid", 00:08:46.651 "uuid": "42032a90-984c-43b3-8968-d1ef3634f9b2", 00:08:46.651 "strip_size_kb": 64, 00:08:46.651 "state": "configuring", 00:08:46.651 "raid_level": "raid0", 00:08:46.651 "superblock": true, 00:08:46.651 "num_base_bdevs": 2, 00:08:46.651 "num_base_bdevs_discovered": 1, 00:08:46.651 "num_base_bdevs_operational": 2, 00:08:46.651 "base_bdevs_list": [ 00:08:46.651 { 00:08:46.651 "name": "BaseBdev1", 00:08:46.651 "uuid": "28dcd564-3471-4b83-af54-a9ea69fc8b42", 00:08:46.651 "is_configured": true, 00:08:46.651 "data_offset": 2048, 
00:08:46.651 "data_size": 63488 00:08:46.651 }, 00:08:46.651 { 00:08:46.651 "name": "BaseBdev2", 00:08:46.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.651 "is_configured": false, 00:08:46.651 "data_offset": 0, 00:08:46.651 "data_size": 0 00:08:46.651 } 00:08:46.651 ] 00:08:46.651 }' 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.651 16:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.218 [2024-10-08 16:16:40.299274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.218 [2024-10-08 16:16:40.299678] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.218 [2024-10-08 16:16:40.299700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:47.218 [2024-10-08 16:16:40.300054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:47.218 [2024-10-08 16:16:40.300263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.218 [2024-10-08 16:16:40.300289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.218 BaseBdev2 00:08:47.218 [2024-10-08 16:16:40.300479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.218 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.218 [ 00:08:47.218 { 00:08:47.218 "name": "BaseBdev2", 00:08:47.218 "aliases": [ 00:08:47.218 "5f64bac1-7f73-4bab-a76c-f420d855f461" 00:08:47.218 ], 00:08:47.219 "product_name": "Malloc disk", 00:08:47.219 "block_size": 512, 00:08:47.219 "num_blocks": 65536, 00:08:47.219 "uuid": "5f64bac1-7f73-4bab-a76c-f420d855f461", 00:08:47.219 "assigned_rate_limits": { 00:08:47.219 "rw_ios_per_sec": 0, 00:08:47.219 "rw_mbytes_per_sec": 0, 00:08:47.219 "r_mbytes_per_sec": 0, 00:08:47.219 "w_mbytes_per_sec": 0 00:08:47.219 }, 00:08:47.219 "claimed": true, 00:08:47.219 "claim_type": 
"exclusive_write", 00:08:47.219 "zoned": false, 00:08:47.219 "supported_io_types": { 00:08:47.219 "read": true, 00:08:47.219 "write": true, 00:08:47.219 "unmap": true, 00:08:47.219 "flush": true, 00:08:47.219 "reset": true, 00:08:47.219 "nvme_admin": false, 00:08:47.219 "nvme_io": false, 00:08:47.219 "nvme_io_md": false, 00:08:47.219 "write_zeroes": true, 00:08:47.219 "zcopy": true, 00:08:47.219 "get_zone_info": false, 00:08:47.219 "zone_management": false, 00:08:47.219 "zone_append": false, 00:08:47.219 "compare": false, 00:08:47.219 "compare_and_write": false, 00:08:47.219 "abort": true, 00:08:47.219 "seek_hole": false, 00:08:47.219 "seek_data": false, 00:08:47.219 "copy": true, 00:08:47.219 "nvme_iov_md": false 00:08:47.219 }, 00:08:47.219 "memory_domains": [ 00:08:47.219 { 00:08:47.219 "dma_device_id": "system", 00:08:47.219 "dma_device_type": 1 00:08:47.219 }, 00:08:47.219 { 00:08:47.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.219 "dma_device_type": 2 00:08:47.219 } 00:08:47.219 ], 00:08:47.219 "driver_specific": {} 00:08:47.219 } 00:08:47.219 ] 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.219 "name": "Existed_Raid", 00:08:47.219 "uuid": "42032a90-984c-43b3-8968-d1ef3634f9b2", 00:08:47.219 "strip_size_kb": 64, 00:08:47.219 "state": "online", 00:08:47.219 "raid_level": "raid0", 00:08:47.219 "superblock": true, 00:08:47.219 "num_base_bdevs": 2, 00:08:47.219 "num_base_bdevs_discovered": 2, 00:08:47.219 "num_base_bdevs_operational": 2, 00:08:47.219 "base_bdevs_list": [ 00:08:47.219 { 00:08:47.219 "name": "BaseBdev1", 00:08:47.219 "uuid": "28dcd564-3471-4b83-af54-a9ea69fc8b42", 00:08:47.219 "is_configured": true, 00:08:47.219 "data_offset": 2048, 00:08:47.219 "data_size": 63488 
00:08:47.219 }, 00:08:47.219 { 00:08:47.219 "name": "BaseBdev2", 00:08:47.219 "uuid": "5f64bac1-7f73-4bab-a76c-f420d855f461", 00:08:47.219 "is_configured": true, 00:08:47.219 "data_offset": 2048, 00:08:47.219 "data_size": 63488 00:08:47.219 } 00:08:47.219 ] 00:08:47.219 }' 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.219 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.786 [2024-10-08 16:16:40.847907] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.786 "name": 
"Existed_Raid", 00:08:47.786 "aliases": [ 00:08:47.786 "42032a90-984c-43b3-8968-d1ef3634f9b2" 00:08:47.786 ], 00:08:47.786 "product_name": "Raid Volume", 00:08:47.786 "block_size": 512, 00:08:47.786 "num_blocks": 126976, 00:08:47.786 "uuid": "42032a90-984c-43b3-8968-d1ef3634f9b2", 00:08:47.786 "assigned_rate_limits": { 00:08:47.786 "rw_ios_per_sec": 0, 00:08:47.786 "rw_mbytes_per_sec": 0, 00:08:47.786 "r_mbytes_per_sec": 0, 00:08:47.786 "w_mbytes_per_sec": 0 00:08:47.786 }, 00:08:47.786 "claimed": false, 00:08:47.786 "zoned": false, 00:08:47.786 "supported_io_types": { 00:08:47.786 "read": true, 00:08:47.786 "write": true, 00:08:47.786 "unmap": true, 00:08:47.786 "flush": true, 00:08:47.786 "reset": true, 00:08:47.786 "nvme_admin": false, 00:08:47.786 "nvme_io": false, 00:08:47.786 "nvme_io_md": false, 00:08:47.786 "write_zeroes": true, 00:08:47.786 "zcopy": false, 00:08:47.786 "get_zone_info": false, 00:08:47.786 "zone_management": false, 00:08:47.786 "zone_append": false, 00:08:47.786 "compare": false, 00:08:47.786 "compare_and_write": false, 00:08:47.786 "abort": false, 00:08:47.786 "seek_hole": false, 00:08:47.786 "seek_data": false, 00:08:47.786 "copy": false, 00:08:47.786 "nvme_iov_md": false 00:08:47.786 }, 00:08:47.786 "memory_domains": [ 00:08:47.786 { 00:08:47.786 "dma_device_id": "system", 00:08:47.786 "dma_device_type": 1 00:08:47.786 }, 00:08:47.786 { 00:08:47.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.786 "dma_device_type": 2 00:08:47.786 }, 00:08:47.786 { 00:08:47.786 "dma_device_id": "system", 00:08:47.786 "dma_device_type": 1 00:08:47.786 }, 00:08:47.786 { 00:08:47.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.786 "dma_device_type": 2 00:08:47.786 } 00:08:47.786 ], 00:08:47.786 "driver_specific": { 00:08:47.786 "raid": { 00:08:47.786 "uuid": "42032a90-984c-43b3-8968-d1ef3634f9b2", 00:08:47.786 "strip_size_kb": 64, 00:08:47.786 "state": "online", 00:08:47.786 "raid_level": "raid0", 00:08:47.786 "superblock": true, 00:08:47.786 
"num_base_bdevs": 2, 00:08:47.786 "num_base_bdevs_discovered": 2, 00:08:47.786 "num_base_bdevs_operational": 2, 00:08:47.786 "base_bdevs_list": [ 00:08:47.786 { 00:08:47.786 "name": "BaseBdev1", 00:08:47.786 "uuid": "28dcd564-3471-4b83-af54-a9ea69fc8b42", 00:08:47.786 "is_configured": true, 00:08:47.786 "data_offset": 2048, 00:08:47.786 "data_size": 63488 00:08:47.786 }, 00:08:47.786 { 00:08:47.786 "name": "BaseBdev2", 00:08:47.786 "uuid": "5f64bac1-7f73-4bab-a76c-f420d855f461", 00:08:47.786 "is_configured": true, 00:08:47.786 "data_offset": 2048, 00:08:47.786 "data_size": 63488 00:08:47.786 } 00:08:47.786 ] 00:08:47.786 } 00:08:47.786 } 00:08:47.786 }' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.786 BaseBdev2' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.786 16:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.787 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.045 [2024-10-08 16:16:41.107722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.045 [2024-10-08 16:16:41.107792] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.045 [2024-10-08 16:16:41.107882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.045 16:16:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.045 "name": "Existed_Raid", 00:08:48.045 "uuid": "42032a90-984c-43b3-8968-d1ef3634f9b2", 00:08:48.045 "strip_size_kb": 64, 00:08:48.045 "state": "offline", 00:08:48.045 "raid_level": "raid0", 00:08:48.045 "superblock": true, 00:08:48.045 "num_base_bdevs": 2, 00:08:48.045 "num_base_bdevs_discovered": 1, 00:08:48.045 "num_base_bdevs_operational": 1, 00:08:48.045 "base_bdevs_list": [ 00:08:48.045 { 00:08:48.045 "name": null, 00:08:48.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.045 "is_configured": false, 00:08:48.045 "data_offset": 0, 00:08:48.045 "data_size": 63488 00:08:48.045 }, 00:08:48.045 { 00:08:48.045 "name": "BaseBdev2", 00:08:48.045 "uuid": "5f64bac1-7f73-4bab-a76c-f420d855f461", 00:08:48.045 "is_configured": true, 00:08:48.045 "data_offset": 2048, 00:08:48.045 "data_size": 63488 00:08:48.045 } 00:08:48.045 ] 00:08:48.045 }' 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.045 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.611 16:16:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.611 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 [2024-10-08 16:16:41.759841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.612 [2024-10-08 16:16:41.759941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61142 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61142 ']' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61142 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.612 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61142 00:08:48.870 killing process with pid 61142 00:08:48.870 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.870 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.870 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61142' 00:08:48.870 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61142 00:08:48.870 16:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61142 00:08:48.870 [2024-10-08 16:16:41.945948] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.870 [2024-10-08 16:16:41.961699] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.245 16:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:08:50.245 00:08:50.245 real 0m5.744s 00:08:50.245 user 0m8.409s 00:08:50.245 sys 0m0.814s 00:08:50.245 16:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.245 16:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.245 ************************************ 00:08:50.245 END TEST raid_state_function_test_sb 00:08:50.245 ************************************ 00:08:50.245 16:16:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:50.245 16:16:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.245 16:16:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.245 16:16:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.245 ************************************ 00:08:50.245 START TEST raid_superblock_test 00:08:50.245 ************************************ 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61404 00:08:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61404 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61404 ']' 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.245 16:16:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.245 [2024-10-08 16:16:43.453836] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:50.245 [2024-10-08 16:16:43.454025] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 00:08:50.503 [2024-10-08 16:16:43.630010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.762 [2024-10-08 16:16:43.906545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.020 [2024-10-08 16:16:44.128964] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.020 [2024-10-08 16:16:44.129291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.278 16:16:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.278 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.279 malloc1 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.279 [2024-10-08 16:16:44.548974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.279 [2024-10-08 16:16:44.549379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.279 [2024-10-08 16:16:44.549427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:51.279 [2024-10-08 16:16:44.549448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.279 [2024-10-08 16:16:44.552644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.279 [2024-10-08 16:16:44.552695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.279 pt1 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.279 16:16:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.279 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.537 malloc2 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.537 [2024-10-08 16:16:44.616450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.537 [2024-10-08 16:16:44.616796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.537 [2024-10-08 16:16:44.616844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.537 
[2024-10-08 16:16:44.616861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.537 [2024-10-08 16:16:44.620064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.537 pt2 00:08:51.537 [2024-10-08 16:16:44.620239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.537 [2024-10-08 16:16:44.624616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.537 [2024-10-08 16:16:44.627344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.537 [2024-10-08 16:16:44.627825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:51.537 [2024-10-08 16:16:44.627875] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:51.537 [2024-10-08 16:16:44.628253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:51.537 [2024-10-08 16:16:44.628489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:51.537 [2024-10-08 16:16:44.628512] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:51.537 [2024-10-08 16:16:44.628825] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.537 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.538 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.538 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.538 "name": "raid_bdev1", 00:08:51.538 "uuid": 
"fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:51.538 "strip_size_kb": 64, 00:08:51.538 "state": "online", 00:08:51.538 "raid_level": "raid0", 00:08:51.538 "superblock": true, 00:08:51.538 "num_base_bdevs": 2, 00:08:51.538 "num_base_bdevs_discovered": 2, 00:08:51.538 "num_base_bdevs_operational": 2, 00:08:51.538 "base_bdevs_list": [ 00:08:51.538 { 00:08:51.538 "name": "pt1", 00:08:51.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.538 "is_configured": true, 00:08:51.538 "data_offset": 2048, 00:08:51.538 "data_size": 63488 00:08:51.538 }, 00:08:51.538 { 00:08:51.538 "name": "pt2", 00:08:51.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.538 "is_configured": true, 00:08:51.538 "data_offset": 2048, 00:08:51.538 "data_size": 63488 00:08:51.538 } 00:08:51.538 ] 00:08:51.538 }' 00:08:51.538 16:16:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.538 16:16:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.106 
16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.106 [2024-10-08 16:16:45.161323] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.106 "name": "raid_bdev1", 00:08:52.106 "aliases": [ 00:08:52.106 "fb1e945f-5b61-4f66-aef2-9717163c0a1c" 00:08:52.106 ], 00:08:52.106 "product_name": "Raid Volume", 00:08:52.106 "block_size": 512, 00:08:52.106 "num_blocks": 126976, 00:08:52.106 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:52.106 "assigned_rate_limits": { 00:08:52.106 "rw_ios_per_sec": 0, 00:08:52.106 "rw_mbytes_per_sec": 0, 00:08:52.106 "r_mbytes_per_sec": 0, 00:08:52.106 "w_mbytes_per_sec": 0 00:08:52.106 }, 00:08:52.106 "claimed": false, 00:08:52.106 "zoned": false, 00:08:52.106 "supported_io_types": { 00:08:52.106 "read": true, 00:08:52.106 "write": true, 00:08:52.106 "unmap": true, 00:08:52.106 "flush": true, 00:08:52.106 "reset": true, 00:08:52.106 "nvme_admin": false, 00:08:52.106 "nvme_io": false, 00:08:52.106 "nvme_io_md": false, 00:08:52.106 "write_zeroes": true, 00:08:52.106 "zcopy": false, 00:08:52.106 "get_zone_info": false, 00:08:52.106 "zone_management": false, 00:08:52.106 "zone_append": false, 00:08:52.106 "compare": false, 00:08:52.106 "compare_and_write": false, 00:08:52.106 "abort": false, 00:08:52.106 "seek_hole": false, 00:08:52.106 "seek_data": false, 00:08:52.106 "copy": false, 00:08:52.106 "nvme_iov_md": false 00:08:52.106 }, 00:08:52.106 "memory_domains": [ 00:08:52.106 { 00:08:52.106 "dma_device_id": "system", 00:08:52.106 "dma_device_type": 1 00:08:52.106 }, 00:08:52.106 { 00:08:52.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.106 "dma_device_type": 2 00:08:52.106 }, 00:08:52.106 { 00:08:52.106 "dma_device_id": "system", 00:08:52.106 
"dma_device_type": 1 00:08:52.106 }, 00:08:52.106 { 00:08:52.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.106 "dma_device_type": 2 00:08:52.106 } 00:08:52.106 ], 00:08:52.106 "driver_specific": { 00:08:52.106 "raid": { 00:08:52.106 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:52.106 "strip_size_kb": 64, 00:08:52.106 "state": "online", 00:08:52.106 "raid_level": "raid0", 00:08:52.106 "superblock": true, 00:08:52.106 "num_base_bdevs": 2, 00:08:52.106 "num_base_bdevs_discovered": 2, 00:08:52.106 "num_base_bdevs_operational": 2, 00:08:52.106 "base_bdevs_list": [ 00:08:52.106 { 00:08:52.106 "name": "pt1", 00:08:52.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.106 "is_configured": true, 00:08:52.106 "data_offset": 2048, 00:08:52.106 "data_size": 63488 00:08:52.106 }, 00:08:52.106 { 00:08:52.106 "name": "pt2", 00:08:52.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.106 "is_configured": true, 00:08:52.106 "data_offset": 2048, 00:08:52.106 "data_size": 63488 00:08:52.106 } 00:08:52.106 ] 00:08:52.106 } 00:08:52.106 } 00:08:52.106 }' 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.106 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.106 pt2' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.107 16:16:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.107 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 [2024-10-08 16:16:45.465359] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb1e945f-5b61-4f66-aef2-9717163c0a1c 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb1e945f-5b61-4f66-aef2-9717163c0a1c ']' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 [2024-10-08 16:16:45.525050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.365 [2024-10-08 16:16:45.525119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.365 [2024-10-08 16:16:45.525260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.365 [2024-10-08 16:16:45.525336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.365 [2024-10-08 16:16:45.525370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.365 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.366 [2024-10-08 16:16:45.673120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:52.366 [2024-10-08 16:16:45.675920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:52.366 [2024-10-08 16:16:45.676030] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:52.366 [2024-10-08 16:16:45.676117] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:52.366 [2024-10-08 16:16:45.676144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.366 [2024-10-08 16:16:45.676162] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:52.366 request: 00:08:52.366 { 00:08:52.366 "name": "raid_bdev1", 00:08:52.366 "raid_level": "raid0", 00:08:52.366 "base_bdevs": [ 00:08:52.366 "malloc1", 00:08:52.366 "malloc2" 00:08:52.366 ], 00:08:52.366 "strip_size_kb": 64, 00:08:52.366 "superblock": false, 00:08:52.366 "method": "bdev_raid_create", 00:08:52.366 "req_id": 1 00:08:52.366 } 00:08:52.366 Got JSON-RPC error response 00:08:52.366 response: 00:08:52.366 { 00:08:52.366 "code": -17, 00:08:52.366 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:52.366 } 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.366 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.624 [2024-10-08 16:16:45.737124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.624 [2024-10-08 16:16:45.737572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.624 [2024-10-08 16:16:45.737753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:52.624 [2024-10-08 16:16:45.737784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.624 [2024-10-08 16:16:45.741234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.624 [2024-10-08 16:16:45.741399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.624 [2024-10-08 16:16:45.741665] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:52.624 [2024-10-08 16:16:45.741861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.624 pt1 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.624 "name": "raid_bdev1", 00:08:52.624 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:52.624 "strip_size_kb": 64, 00:08:52.624 "state": "configuring", 00:08:52.624 "raid_level": "raid0", 00:08:52.624 "superblock": true, 00:08:52.624 "num_base_bdevs": 2, 00:08:52.624 "num_base_bdevs_discovered": 1, 00:08:52.624 "num_base_bdevs_operational": 2, 00:08:52.624 "base_bdevs_list": [ 00:08:52.624 { 00:08:52.624 "name": "pt1", 00:08:52.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.624 "is_configured": true, 00:08:52.624 "data_offset": 2048, 00:08:52.624 "data_size": 63488 00:08:52.624 }, 00:08:52.624 { 00:08:52.624 "name": null, 00:08:52.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.624 "is_configured": false, 00:08:52.624 "data_offset": 2048, 00:08:52.624 "data_size": 63488 00:08:52.624 } 00:08:52.624 ] 00:08:52.624 }' 00:08:52.624 16:16:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.624 16:16:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.190 [2024-10-08 16:16:46.257924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.190 [2024-10-08 16:16:46.258077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.190 [2024-10-08 16:16:46.258114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:53.190 [2024-10-08 16:16:46.258133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.190 [2024-10-08 16:16:46.258872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.190 [2024-10-08 16:16:46.258911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.190 [2024-10-08 16:16:46.259031] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.190 [2024-10-08 16:16:46.259071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.190 [2024-10-08 16:16:46.259228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.190 [2024-10-08 16:16:46.259249] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:53.190 [2024-10-08 16:16:46.259584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:53.190 [2024-10-08 16:16:46.259781] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.190 [2024-10-08 16:16:46.259797] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.190 [2024-10-08 16:16:46.259968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.190 pt2 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.190 "name": "raid_bdev1", 00:08:53.190 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:53.190 "strip_size_kb": 64, 00:08:53.190 "state": "online", 00:08:53.190 "raid_level": "raid0", 00:08:53.190 "superblock": true, 00:08:53.190 "num_base_bdevs": 2, 00:08:53.190 "num_base_bdevs_discovered": 2, 00:08:53.190 "num_base_bdevs_operational": 2, 00:08:53.190 "base_bdevs_list": [ 00:08:53.190 { 00:08:53.190 "name": "pt1", 00:08:53.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.190 "is_configured": true, 00:08:53.190 "data_offset": 2048, 00:08:53.190 "data_size": 63488 00:08:53.190 }, 00:08:53.190 { 00:08:53.190 "name": "pt2", 00:08:53.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.190 "is_configured": true, 00:08:53.190 "data_offset": 2048, 00:08:53.190 "data_size": 63488 00:08:53.190 } 00:08:53.190 ] 00:08:53.190 }' 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.190 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.756 
16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 [2024-10-08 16:16:46.806392] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.756 "name": "raid_bdev1", 00:08:53.756 "aliases": [ 00:08:53.756 "fb1e945f-5b61-4f66-aef2-9717163c0a1c" 00:08:53.756 ], 00:08:53.756 "product_name": "Raid Volume", 00:08:53.756 "block_size": 512, 00:08:53.756 "num_blocks": 126976, 00:08:53.756 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:53.756 "assigned_rate_limits": { 00:08:53.756 "rw_ios_per_sec": 0, 00:08:53.756 "rw_mbytes_per_sec": 0, 00:08:53.756 "r_mbytes_per_sec": 0, 00:08:53.756 "w_mbytes_per_sec": 0 00:08:53.756 }, 00:08:53.756 "claimed": false, 00:08:53.756 "zoned": false, 00:08:53.756 "supported_io_types": { 00:08:53.756 "read": true, 00:08:53.756 "write": true, 00:08:53.756 "unmap": true, 00:08:53.756 "flush": true, 00:08:53.756 "reset": true, 00:08:53.756 "nvme_admin": false, 00:08:53.756 "nvme_io": false, 00:08:53.756 "nvme_io_md": false, 00:08:53.756 
"write_zeroes": true, 00:08:53.756 "zcopy": false, 00:08:53.756 "get_zone_info": false, 00:08:53.756 "zone_management": false, 00:08:53.756 "zone_append": false, 00:08:53.756 "compare": false, 00:08:53.756 "compare_and_write": false, 00:08:53.756 "abort": false, 00:08:53.756 "seek_hole": false, 00:08:53.756 "seek_data": false, 00:08:53.756 "copy": false, 00:08:53.756 "nvme_iov_md": false 00:08:53.756 }, 00:08:53.756 "memory_domains": [ 00:08:53.756 { 00:08:53.756 "dma_device_id": "system", 00:08:53.756 "dma_device_type": 1 00:08:53.756 }, 00:08:53.756 { 00:08:53.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.756 "dma_device_type": 2 00:08:53.756 }, 00:08:53.756 { 00:08:53.756 "dma_device_id": "system", 00:08:53.756 "dma_device_type": 1 00:08:53.756 }, 00:08:53.756 { 00:08:53.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.756 "dma_device_type": 2 00:08:53.756 } 00:08:53.756 ], 00:08:53.756 "driver_specific": { 00:08:53.756 "raid": { 00:08:53.756 "uuid": "fb1e945f-5b61-4f66-aef2-9717163c0a1c", 00:08:53.756 "strip_size_kb": 64, 00:08:53.756 "state": "online", 00:08:53.756 "raid_level": "raid0", 00:08:53.756 "superblock": true, 00:08:53.756 "num_base_bdevs": 2, 00:08:53.756 "num_base_bdevs_discovered": 2, 00:08:53.756 "num_base_bdevs_operational": 2, 00:08:53.756 "base_bdevs_list": [ 00:08:53.756 { 00:08:53.756 "name": "pt1", 00:08:53.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.756 "is_configured": true, 00:08:53.756 "data_offset": 2048, 00:08:53.756 "data_size": 63488 00:08:53.756 }, 00:08:53.756 { 00:08:53.756 "name": "pt2", 00:08:53.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.756 "is_configured": true, 00:08:53.756 "data_offset": 2048, 00:08:53.756 "data_size": 63488 00:08:53.756 } 00:08:53.756 ] 00:08:53.756 } 00:08:53.756 } 00:08:53.756 }' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.756 pt2' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.756 16:16:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.756 16:16:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.757 [2024-10-08 16:16:47.058426] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fb1e945f-5b61-4f66-aef2-9717163c0a1c '!=' fb1e945f-5b61-4f66-aef2-9717163c0a1c ']' 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61404 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61404 ']' 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61404 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61404 00:08:54.014 killing process with pid 61404 
00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61404' 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61404 00:08:54.014 [2024-10-08 16:16:47.138081] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.014 16:16:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61404 00:08:54.014 [2024-10-08 16:16:47.138226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.014 [2024-10-08 16:16:47.138304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.014 [2024-10-08 16:16:47.138325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:54.272 [2024-10-08 16:16:47.341076] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.731 16:16:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:55.731 00:08:55.731 real 0m5.304s 00:08:55.731 user 0m7.675s 00:08:55.731 sys 0m0.769s 00:08:55.731 16:16:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.731 16:16:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.731 ************************************ 00:08:55.731 END TEST raid_superblock_test 00:08:55.731 ************************************ 00:08:55.731 16:16:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:55.731 16:16:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:55.731 16:16:48 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.731 16:16:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.731 ************************************ 00:08:55.731 START TEST raid_read_error_test 00:08:55.731 ************************************ 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:55.731 16:16:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RLYdNofHUi 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61624 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61624 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61624 ']' 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.731 16:16:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.731 [2024-10-08 16:16:48.824774] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:55.731 [2024-10-08 16:16:48.824962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61624 ] 00:08:55.731 [2024-10-08 16:16:48.992454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.989 [2024-10-08 16:16:49.260990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.246 [2024-10-08 16:16:49.483947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.246 [2024-10-08 16:16:49.484013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.504 BaseBdev1_malloc 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.504 true 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.504 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.762 [2024-10-08 16:16:49.828348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:56.762 [2024-10-08 16:16:49.828435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.762 [2024-10-08 16:16:49.828463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:56.762 [2024-10-08 16:16:49.828482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.762 [2024-10-08 16:16:49.831614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.762 [2024-10-08 16:16:49.831674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:56.762 BaseBdev1 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:56.762 BaseBdev2_malloc 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.762 true 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.762 [2024-10-08 16:16:49.898018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:56.762 [2024-10-08 16:16:49.898117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.762 [2024-10-08 16:16:49.898146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:56.762 [2024-10-08 16:16:49.898165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.762 [2024-10-08 16:16:49.901175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.762 [2024-10-08 16:16:49.901222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:56.762 BaseBdev2 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:56.762 16:16:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.762 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.762 [2024-10-08 16:16:49.906198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.762 [2024-10-08 16:16:49.908847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.763 [2024-10-08 16:16:49.909118] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.763 [2024-10-08 16:16:49.909142] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:56.763 [2024-10-08 16:16:49.909451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:56.763 [2024-10-08 16:16:49.909706] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.763 [2024-10-08 16:16:49.909733] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:56.763 [2024-10-08 16:16:49.909943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.763 "name": "raid_bdev1", 00:08:56.763 "uuid": "3cdfce25-9d49-4da6-ab7c-6093a11d3ed0", 00:08:56.763 "strip_size_kb": 64, 00:08:56.763 "state": "online", 00:08:56.763 "raid_level": "raid0", 00:08:56.763 "superblock": true, 00:08:56.763 "num_base_bdevs": 2, 00:08:56.763 "num_base_bdevs_discovered": 2, 00:08:56.763 "num_base_bdevs_operational": 2, 00:08:56.763 "base_bdevs_list": [ 00:08:56.763 { 00:08:56.763 "name": "BaseBdev1", 00:08:56.763 "uuid": "e22a270c-928a-5158-980a-18602434d7f5", 00:08:56.763 "is_configured": true, 00:08:56.763 "data_offset": 2048, 00:08:56.763 "data_size": 63488 00:08:56.763 }, 00:08:56.763 { 00:08:56.763 "name": "BaseBdev2", 00:08:56.763 "uuid": "24c353c8-7edf-50f1-a30a-2e75321e7c01", 00:08:56.763 "is_configured": true, 00:08:56.763 "data_offset": 2048, 00:08:56.763 "data_size": 63488 00:08:56.763 } 00:08:56.763 ] 00:08:56.763 }' 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.763 16:16:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.329 16:16:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:57.329 16:16:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:57.329 [2024-10-08 16:16:50.535975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.264 "name": "raid_bdev1", 00:08:58.264 "uuid": "3cdfce25-9d49-4da6-ab7c-6093a11d3ed0", 00:08:58.264 "strip_size_kb": 64, 00:08:58.264 "state": "online", 00:08:58.264 "raid_level": "raid0", 00:08:58.264 "superblock": true, 00:08:58.264 "num_base_bdevs": 2, 00:08:58.264 "num_base_bdevs_discovered": 2, 00:08:58.264 "num_base_bdevs_operational": 2, 00:08:58.264 "base_bdevs_list": [ 00:08:58.264 { 00:08:58.264 "name": "BaseBdev1", 00:08:58.264 "uuid": "e22a270c-928a-5158-980a-18602434d7f5", 00:08:58.264 "is_configured": true, 00:08:58.264 "data_offset": 2048, 00:08:58.264 "data_size": 63488 00:08:58.264 }, 00:08:58.264 { 00:08:58.264 "name": "BaseBdev2", 00:08:58.264 "uuid": "24c353c8-7edf-50f1-a30a-2e75321e7c01", 00:08:58.264 "is_configured": true, 00:08:58.264 "data_offset": 2048, 00:08:58.264 "data_size": 63488 00:08:58.264 } 00:08:58.264 ] 00:08:58.264 }' 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.264 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.830 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.830 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.830 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.830 [2024-10-08 16:16:51.927604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.830 [2024-10-08 16:16:51.927681] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.830 [2024-10-08 16:16:51.931143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.830 [2024-10-08 16:16:51.931219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.831 [2024-10-08 16:16:51.931270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.831 [2024-10-08 16:16:51.931299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61624 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61624 ']' 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61624 00:08:58.831 { 00:08:58.831 "results": [ 00:08:58.831 { 00:08:58.831 "job": "raid_bdev1", 00:08:58.831 "core_mask": "0x1", 00:08:58.831 "workload": "randrw", 00:08:58.831 "percentage": 50, 00:08:58.831 "status": "finished", 00:08:58.831 "queue_depth": 1, 00:08:58.831 "io_size": 131072, 00:08:58.831 "runtime": 1.388912, 00:08:58.831 
"iops": 10066.152499222413, 00:08:58.831 "mibps": 1258.2690624028016, 00:08:58.831 "io_failed": 1, 00:08:58.831 "io_timeout": 0, 00:08:58.831 "avg_latency_us": 140.33640134718664, 00:08:58.831 "min_latency_us": 43.985454545454544, 00:08:58.831 "max_latency_us": 1854.370909090909 00:08:58.831 } 00:08:58.831 ], 00:08:58.831 "core_count": 1 00:08:58.831 } 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61624 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61624' 00:08:58.831 killing process with pid 61624 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61624 00:08:58.831 16:16:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61624 00:08:58.831 [2024-10-08 16:16:51.964927] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.831 [2024-10-08 16:16:52.104488] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RLYdNofHUi 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 
-- # has_redundancy raid0 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:00.222 00:09:00.222 real 0m4.780s 00:09:00.222 user 0m5.804s 00:09:00.222 sys 0m0.625s 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.222 16:16:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.222 ************************************ 00:09:00.222 END TEST raid_read_error_test 00:09:00.222 ************************************ 00:09:00.222 16:16:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:00.222 16:16:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.222 16:16:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.222 16:16:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.222 ************************************ 00:09:00.222 START TEST raid_write_error_test 00:09:00.222 ************************************ 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:00.222 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VM42y3fPfo 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61770 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 61770 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61770 ']' 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.481 16:16:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.481 [2024-10-08 16:16:53.659263] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:00.481 [2024-10-08 16:16:53.659460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61770 ] 00:09:00.739 [2024-10-08 16:16:53.836166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.998 [2024-10-08 16:16:54.166200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.257 [2024-10-08 16:16:54.407880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.257 [2024-10-08 16:16:54.407982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.516 BaseBdev1_malloc 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.516 true 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.516 [2024-10-08 16:16:54.723627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.516 [2024-10-08 16:16:54.723795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.516 [2024-10-08 16:16:54.723847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:01.516 [2024-10-08 16:16:54.723884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.516 [2024-10-08 16:16:54.728354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.516 [2024-10-08 16:16:54.728435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.516 BaseBdev1 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:01.516 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.517 BaseBdev2_malloc 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:01.517 16:16:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.517 true 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.517 [2024-10-08 16:16:54.807920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:01.517 [2024-10-08 16:16:54.808053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.517 [2024-10-08 16:16:54.808100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:01.517 [2024-10-08 16:16:54.808129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.517 [2024-10-08 16:16:54.811485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.517 [2024-10-08 16:16:54.811554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:01.517 BaseBdev2 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.517 [2024-10-08 16:16:54.815963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:01.517 [2024-10-08 16:16:54.818775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.517 [2024-10-08 16:16:54.819031] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.517 [2024-10-08 16:16:54.819063] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:01.517 [2024-10-08 16:16:54.819459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:01.517 [2024-10-08 16:16:54.819708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.517 [2024-10-08 16:16:54.819726] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:01.517 [2024-10-08 16:16:54.820001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.517 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.851 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.851 "name": "raid_bdev1", 00:09:01.851 "uuid": "dfd0ed5c-ce5c-415d-b00e-ed26fd70cea2", 00:09:01.851 "strip_size_kb": 64, 00:09:01.851 "state": "online", 00:09:01.851 "raid_level": "raid0", 00:09:01.851 "superblock": true, 00:09:01.851 "num_base_bdevs": 2, 00:09:01.851 "num_base_bdevs_discovered": 2, 00:09:01.851 "num_base_bdevs_operational": 2, 00:09:01.851 "base_bdevs_list": [ 00:09:01.851 { 00:09:01.851 "name": "BaseBdev1", 00:09:01.851 "uuid": "2de3698f-5f21-5ec6-abe9-122164bc9e34", 00:09:01.851 "is_configured": true, 00:09:01.851 "data_offset": 2048, 00:09:01.851 "data_size": 63488 00:09:01.851 }, 00:09:01.851 { 00:09:01.851 "name": "BaseBdev2", 00:09:01.851 "uuid": "d2dd584d-60fd-5a3a-8266-4f3aa796eed8", 00:09:01.851 "is_configured": true, 00:09:01.851 "data_offset": 2048, 00:09:01.851 "data_size": 63488 00:09:01.851 } 00:09:01.851 ] 00:09:01.851 }' 00:09:01.851 16:16:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.851 16:16:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.110 16:16:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.110 16:16:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.369 [2024-10-08 16:16:55.457776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.303 "name": "raid_bdev1", 00:09:03.303 "uuid": "dfd0ed5c-ce5c-415d-b00e-ed26fd70cea2", 00:09:03.303 "strip_size_kb": 64, 00:09:03.303 "state": "online", 00:09:03.303 "raid_level": "raid0", 00:09:03.303 "superblock": true, 00:09:03.303 "num_base_bdevs": 2, 00:09:03.303 "num_base_bdevs_discovered": 2, 00:09:03.303 "num_base_bdevs_operational": 2, 00:09:03.303 "base_bdevs_list": [ 00:09:03.303 { 00:09:03.303 "name": "BaseBdev1", 00:09:03.303 "uuid": "2de3698f-5f21-5ec6-abe9-122164bc9e34", 00:09:03.303 "is_configured": true, 00:09:03.303 "data_offset": 2048, 00:09:03.303 "data_size": 63488 00:09:03.303 }, 00:09:03.303 { 00:09:03.303 "name": "BaseBdev2", 00:09:03.303 "uuid": "d2dd584d-60fd-5a3a-8266-4f3aa796eed8", 00:09:03.303 "is_configured": true, 00:09:03.303 "data_offset": 2048, 00:09:03.303 "data_size": 63488 00:09:03.303 } 00:09:03.303 ] 00:09:03.303 }' 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.303 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.868 16:16:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.869 [2024-10-08 16:16:56.924382] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.869 [2024-10-08 16:16:56.924764] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.869 [2024-10-08 16:16:56.928302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.869 [2024-10-08 16:16:56.928598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.869 [2024-10-08 16:16:56.928799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.869 { 00:09:03.869 "results": [ 00:09:03.869 { 00:09:03.869 "job": "raid_bdev1", 00:09:03.869 "core_mask": "0x1", 00:09:03.869 "workload": "randrw", 00:09:03.869 "percentage": 50, 00:09:03.869 "status": "finished", 00:09:03.869 "queue_depth": 1, 00:09:03.869 "io_size": 131072, 00:09:03.869 "runtime": 1.464336, 00:09:03.869 "iops": 9880.9289671223, 00:09:03.869 "mibps": 1235.1161208902874, 00:09:03.869 "io_failed": 1, 00:09:03.869 "io_timeout": 0, 00:09:03.869 "avg_latency_us": 142.62060438524847, 00:09:03.869 "min_latency_us": 42.589090909090906, 00:09:03.869 "max_latency_us": 1861.8181818181818 00:09:03.869 } 00:09:03.869 ], 00:09:03.869 "core_count": 1 00:09:03.869 } 00:09:03.869 [2024-10-08 16:16:56.928965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61770 00:09:03.869 16:16:56
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 61770 ']' 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61770 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61770 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61770' 00:09:03.869 killing process with pid 61770 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61770 00:09:03.869 16:16:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61770 00:09:03.869 [2024-10-08 16:16:56.968581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.869 [2024-10-08 16:16:57.102206] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VM42y3fPfo 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.241 ************************************ 00:09:05.241 END TEST raid_write_error_test 00:09:05.241 
************************************ 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:09:05.241 00:09:05.241 real 0m4.967s 00:09:05.241 user 0m6.046s 00:09:05.241 sys 0m0.659s 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.241 16:16:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.241 16:16:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:05.241 16:16:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:05.241 16:16:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.241 16:16:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.241 16:16:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.241 ************************************ 00:09:05.241 START TEST raid_state_function_test 00:09:05.241 ************************************ 00:09:05.241 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:05.241 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:05.241 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:05.241 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:05.241 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.502 Process raid pid: 61919 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:05.502 16:16:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61919 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61919' 00:09:05.502 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61919 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61919 ']' 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.503 16:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.503 [2024-10-08 16:16:58.711817] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:05.503 [2024-10-08 16:16:58.712298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.760 [2024-10-08 16:16:58.893019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.018 [2024-10-08 16:16:59.176741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.276 [2024-10-08 16:16:59.411210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.276 [2024-10-08 16:16:59.411458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.534 [2024-10-08 16:16:59.725975] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.534 [2024-10-08 16:16:59.726311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.534 [2024-10-08 16:16:59.726339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.534 [2024-10-08 16:16:59.726369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.534 16:16:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.534 "name": "Existed_Raid", 00:09:06.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.534 "strip_size_kb": 64, 00:09:06.534 "state": "configuring", 00:09:06.534 
"raid_level": "concat", 00:09:06.534 "superblock": false, 00:09:06.534 "num_base_bdevs": 2, 00:09:06.534 "num_base_bdevs_discovered": 0, 00:09:06.534 "num_base_bdevs_operational": 2, 00:09:06.534 "base_bdevs_list": [ 00:09:06.534 { 00:09:06.534 "name": "BaseBdev1", 00:09:06.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.534 "is_configured": false, 00:09:06.534 "data_offset": 0, 00:09:06.534 "data_size": 0 00:09:06.534 }, 00:09:06.534 { 00:09:06.534 "name": "BaseBdev2", 00:09:06.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.534 "is_configured": false, 00:09:06.534 "data_offset": 0, 00:09:06.534 "data_size": 0 00:09:06.534 } 00:09:06.534 ] 00:09:06.534 }' 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.534 16:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.100 [2024-10-08 16:17:00.242072] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.100 [2024-10-08 16:17:00.242345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:07.100 [2024-10-08 16:17:00.254099] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.100 [2024-10-08 16:17:00.254314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.100 [2024-10-08 16:17:00.254436] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.100 [2024-10-08 16:17:00.254612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.100 [2024-10-08 16:17:00.320305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.100 BaseBdev1 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.100 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.100 [ 00:09:07.100 { 00:09:07.100 "name": "BaseBdev1", 00:09:07.100 "aliases": [ 00:09:07.100 "0c1beeac-3261-4ccb-869c-556a643c3783" 00:09:07.100 ], 00:09:07.100 "product_name": "Malloc disk", 00:09:07.100 "block_size": 512, 00:09:07.100 "num_blocks": 65536, 00:09:07.100 "uuid": "0c1beeac-3261-4ccb-869c-556a643c3783", 00:09:07.100 "assigned_rate_limits": { 00:09:07.100 "rw_ios_per_sec": 0, 00:09:07.100 "rw_mbytes_per_sec": 0, 00:09:07.100 "r_mbytes_per_sec": 0, 00:09:07.100 "w_mbytes_per_sec": 0 00:09:07.100 }, 00:09:07.100 "claimed": true, 00:09:07.100 "claim_type": "exclusive_write", 00:09:07.100 "zoned": false, 00:09:07.100 "supported_io_types": { 00:09:07.100 "read": true, 00:09:07.100 "write": true, 00:09:07.100 "unmap": true, 00:09:07.100 "flush": true, 00:09:07.100 "reset": true, 00:09:07.100 "nvme_admin": false, 00:09:07.100 "nvme_io": false, 00:09:07.100 "nvme_io_md": false, 00:09:07.100 "write_zeroes": true, 00:09:07.100 "zcopy": true, 00:09:07.100 "get_zone_info": false, 00:09:07.100 "zone_management": false, 00:09:07.100 "zone_append": false, 00:09:07.100 "compare": false, 00:09:07.101 "compare_and_write": false, 00:09:07.101 "abort": true, 00:09:07.101 "seek_hole": false, 00:09:07.101 "seek_data": false, 00:09:07.101 "copy": true, 00:09:07.101 "nvme_iov_md": 
false 00:09:07.101 }, 00:09:07.101 "memory_domains": [ 00:09:07.101 { 00:09:07.101 "dma_device_id": "system", 00:09:07.101 "dma_device_type": 1 00:09:07.101 }, 00:09:07.101 { 00:09:07.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.101 "dma_device_type": 2 00:09:07.101 } 00:09:07.101 ], 00:09:07.101 "driver_specific": {} 00:09:07.101 } 00:09:07.101 ] 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.101 
16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.101 "name": "Existed_Raid", 00:09:07.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.101 "strip_size_kb": 64, 00:09:07.101 "state": "configuring", 00:09:07.101 "raid_level": "concat", 00:09:07.101 "superblock": false, 00:09:07.101 "num_base_bdevs": 2, 00:09:07.101 "num_base_bdevs_discovered": 1, 00:09:07.101 "num_base_bdevs_operational": 2, 00:09:07.101 "base_bdevs_list": [ 00:09:07.101 { 00:09:07.101 "name": "BaseBdev1", 00:09:07.101 "uuid": "0c1beeac-3261-4ccb-869c-556a643c3783", 00:09:07.101 "is_configured": true, 00:09:07.101 "data_offset": 0, 00:09:07.101 "data_size": 65536 00:09:07.101 }, 00:09:07.101 { 00:09:07.101 "name": "BaseBdev2", 00:09:07.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.101 "is_configured": false, 00:09:07.101 "data_offset": 0, 00:09:07.101 "data_size": 0 00:09:07.101 } 00:09:07.101 ] 00:09:07.101 }' 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.101 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.669 [2024-10-08 16:17:00.868592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.669 [2024-10-08 16:17:00.868699] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.669 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.669 [2024-10-08 16:17:00.876509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.669 [2024-10-08 16:17:00.879237] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.670 [2024-10-08 16:17:00.879302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.670 "name": "Existed_Raid", 00:09:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.670 "strip_size_kb": 64, 00:09:07.670 "state": "configuring", 00:09:07.670 "raid_level": "concat", 00:09:07.670 "superblock": false, 00:09:07.670 "num_base_bdevs": 2, 00:09:07.670 "num_base_bdevs_discovered": 1, 00:09:07.670 "num_base_bdevs_operational": 2, 00:09:07.670 "base_bdevs_list": [ 00:09:07.670 { 00:09:07.670 "name": "BaseBdev1", 00:09:07.670 "uuid": "0c1beeac-3261-4ccb-869c-556a643c3783", 00:09:07.670 "is_configured": true, 00:09:07.670 "data_offset": 0, 00:09:07.670 "data_size": 65536 00:09:07.670 }, 00:09:07.670 { 00:09:07.670 "name": "BaseBdev2", 00:09:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.670 "is_configured": false, 00:09:07.670 "data_offset": 0, 00:09:07.670 "data_size": 0 00:09:07.670 } 
00:09:07.670 ] 00:09:07.670 }' 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.670 16:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.235 [2024-10-08 16:17:01.442877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.235 [2024-10-08 16:17:01.442950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.235 [2024-10-08 16:17:01.442965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:08.235 [2024-10-08 16:17:01.443318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:08.235 [2024-10-08 16:17:01.443579] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.235 [2024-10-08 16:17:01.443602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:08.235 [2024-10-08 16:17:01.443936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.235 BaseBdev2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.235 16:17:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.235 [ 00:09:08.235 { 00:09:08.235 "name": "BaseBdev2", 00:09:08.235 "aliases": [ 00:09:08.235 "c4a0a2f5-07eb-4c10-92b7-b43e02194d34" 00:09:08.235 ], 00:09:08.235 "product_name": "Malloc disk", 00:09:08.235 "block_size": 512, 00:09:08.235 "num_blocks": 65536, 00:09:08.235 "uuid": "c4a0a2f5-07eb-4c10-92b7-b43e02194d34", 00:09:08.235 "assigned_rate_limits": { 00:09:08.235 "rw_ios_per_sec": 0, 00:09:08.235 "rw_mbytes_per_sec": 0, 00:09:08.235 "r_mbytes_per_sec": 0, 00:09:08.235 "w_mbytes_per_sec": 0 00:09:08.235 }, 00:09:08.235 "claimed": true, 00:09:08.235 "claim_type": "exclusive_write", 00:09:08.235 "zoned": false, 00:09:08.235 "supported_io_types": { 00:09:08.235 "read": true, 00:09:08.235 "write": true, 00:09:08.235 "unmap": true, 00:09:08.235 "flush": true, 00:09:08.235 "reset": true, 00:09:08.235 "nvme_admin": false, 00:09:08.235 "nvme_io": false, 00:09:08.235 "nvme_io_md": 
false, 00:09:08.235 "write_zeroes": true, 00:09:08.235 "zcopy": true, 00:09:08.235 "get_zone_info": false, 00:09:08.235 "zone_management": false, 00:09:08.235 "zone_append": false, 00:09:08.235 "compare": false, 00:09:08.235 "compare_and_write": false, 00:09:08.235 "abort": true, 00:09:08.235 "seek_hole": false, 00:09:08.235 "seek_data": false, 00:09:08.235 "copy": true, 00:09:08.235 "nvme_iov_md": false 00:09:08.235 }, 00:09:08.235 "memory_domains": [ 00:09:08.235 { 00:09:08.235 "dma_device_id": "system", 00:09:08.235 "dma_device_type": 1 00:09:08.235 }, 00:09:08.235 { 00:09:08.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.235 "dma_device_type": 2 00:09:08.235 } 00:09:08.235 ], 00:09:08.235 "driver_specific": {} 00:09:08.235 } 00:09:08.235 ] 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.235 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.236 "name": "Existed_Raid", 00:09:08.236 "uuid": "c3373779-6dd4-408f-b6f9-c86bf4769471", 00:09:08.236 "strip_size_kb": 64, 00:09:08.236 "state": "online", 00:09:08.236 "raid_level": "concat", 00:09:08.236 "superblock": false, 00:09:08.236 "num_base_bdevs": 2, 00:09:08.236 "num_base_bdevs_discovered": 2, 00:09:08.236 "num_base_bdevs_operational": 2, 00:09:08.236 "base_bdevs_list": [ 00:09:08.236 { 00:09:08.236 "name": "BaseBdev1", 00:09:08.236 "uuid": "0c1beeac-3261-4ccb-869c-556a643c3783", 00:09:08.236 "is_configured": true, 00:09:08.236 "data_offset": 0, 00:09:08.236 "data_size": 65536 00:09:08.236 }, 00:09:08.236 { 00:09:08.236 "name": "BaseBdev2", 00:09:08.236 "uuid": "c4a0a2f5-07eb-4c10-92b7-b43e02194d34", 00:09:08.236 "is_configured": true, 00:09:08.236 "data_offset": 0, 00:09:08.236 "data_size": 65536 00:09:08.236 } 00:09:08.236 ] 00:09:08.236 }' 00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:08.236 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.801 16:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 [2024-10-08 16:17:01.975477] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.801 "name": "Existed_Raid", 00:09:08.801 "aliases": [ 00:09:08.801 "c3373779-6dd4-408f-b6f9-c86bf4769471" 00:09:08.801 ], 00:09:08.801 "product_name": "Raid Volume", 00:09:08.801 "block_size": 512, 00:09:08.801 "num_blocks": 131072, 00:09:08.801 "uuid": "c3373779-6dd4-408f-b6f9-c86bf4769471", 00:09:08.801 "assigned_rate_limits": { 00:09:08.801 "rw_ios_per_sec": 0, 00:09:08.801 "rw_mbytes_per_sec": 0, 00:09:08.801 "r_mbytes_per_sec": 
0, 00:09:08.801 "w_mbytes_per_sec": 0 00:09:08.801 }, 00:09:08.801 "claimed": false, 00:09:08.801 "zoned": false, 00:09:08.801 "supported_io_types": { 00:09:08.801 "read": true, 00:09:08.801 "write": true, 00:09:08.801 "unmap": true, 00:09:08.801 "flush": true, 00:09:08.801 "reset": true, 00:09:08.801 "nvme_admin": false, 00:09:08.801 "nvme_io": false, 00:09:08.801 "nvme_io_md": false, 00:09:08.801 "write_zeroes": true, 00:09:08.801 "zcopy": false, 00:09:08.801 "get_zone_info": false, 00:09:08.801 "zone_management": false, 00:09:08.801 "zone_append": false, 00:09:08.801 "compare": false, 00:09:08.801 "compare_and_write": false, 00:09:08.801 "abort": false, 00:09:08.801 "seek_hole": false, 00:09:08.801 "seek_data": false, 00:09:08.801 "copy": false, 00:09:08.801 "nvme_iov_md": false 00:09:08.801 }, 00:09:08.801 "memory_domains": [ 00:09:08.801 { 00:09:08.801 "dma_device_id": "system", 00:09:08.801 "dma_device_type": 1 00:09:08.801 }, 00:09:08.801 { 00:09:08.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.801 "dma_device_type": 2 00:09:08.801 }, 00:09:08.801 { 00:09:08.801 "dma_device_id": "system", 00:09:08.801 "dma_device_type": 1 00:09:08.801 }, 00:09:08.801 { 00:09:08.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.801 "dma_device_type": 2 00:09:08.801 } 00:09:08.801 ], 00:09:08.801 "driver_specific": { 00:09:08.801 "raid": { 00:09:08.801 "uuid": "c3373779-6dd4-408f-b6f9-c86bf4769471", 00:09:08.801 "strip_size_kb": 64, 00:09:08.801 "state": "online", 00:09:08.801 "raid_level": "concat", 00:09:08.801 "superblock": false, 00:09:08.801 "num_base_bdevs": 2, 00:09:08.801 "num_base_bdevs_discovered": 2, 00:09:08.801 "num_base_bdevs_operational": 2, 00:09:08.801 "base_bdevs_list": [ 00:09:08.801 { 00:09:08.801 "name": "BaseBdev1", 00:09:08.801 "uuid": "0c1beeac-3261-4ccb-869c-556a643c3783", 00:09:08.801 "is_configured": true, 00:09:08.801 "data_offset": 0, 00:09:08.801 "data_size": 65536 00:09:08.801 }, 00:09:08.801 { 00:09:08.801 "name": "BaseBdev2", 
00:09:08.801 "uuid": "c4a0a2f5-07eb-4c10-92b7-b43e02194d34", 00:09:08.801 "is_configured": true, 00:09:08.801 "data_offset": 0, 00:09:08.801 "data_size": 65536 00:09:08.801 } 00:09:08.801 ] 00:09:08.801 } 00:09:08.801 } 00:09:08.801 }' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.801 BaseBdev2' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.801 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 [2024-10-08 16:17:02.211178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.059 [2024-10-08 16:17:02.211240] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.059 [2024-10-08 16:17:02.211319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.059 "name": "Existed_Raid", 00:09:09.059 "uuid": "c3373779-6dd4-408f-b6f9-c86bf4769471", 00:09:09.059 "strip_size_kb": 64, 00:09:09.059 
"state": "offline", 00:09:09.059 "raid_level": "concat", 00:09:09.059 "superblock": false, 00:09:09.059 "num_base_bdevs": 2, 00:09:09.059 "num_base_bdevs_discovered": 1, 00:09:09.059 "num_base_bdevs_operational": 1, 00:09:09.059 "base_bdevs_list": [ 00:09:09.059 { 00:09:09.059 "name": null, 00:09:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.059 "is_configured": false, 00:09:09.059 "data_offset": 0, 00:09:09.059 "data_size": 65536 00:09:09.059 }, 00:09:09.059 { 00:09:09.059 "name": "BaseBdev2", 00:09:09.059 "uuid": "c4a0a2f5-07eb-4c10-92b7-b43e02194d34", 00:09:09.059 "is_configured": true, 00:09:09.059 "data_offset": 0, 00:09:09.059 "data_size": 65536 00:09:09.059 } 00:09:09.059 ] 00:09:09.059 }' 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.059 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.623 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.623 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.623 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.623 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.623 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.624 16:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.624 [2024-10-08 16:17:02.863148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.624 [2024-10-08 16:17:02.863228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61919 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61919 ']' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61919 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61919 00:09:09.882 killing process with pid 61919 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61919' 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61919 00:09:09.882 [2024-10-08 16:17:03.096070] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.882 16:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61919 00:09:09.882 [2024-10-08 16:17:03.111487] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.282 ************************************ 00:09:11.282 END TEST raid_state_function_test 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:11.282 00:09:11.282 real 0m5.868s 00:09:11.282 user 0m8.561s 00:09:11.282 sys 0m0.900s 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.282 ************************************ 00:09:11.282 16:17:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:11.282 16:17:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:11.282 16:17:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.282 16:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.282 ************************************ 00:09:11.282 START TEST raid_state_function_test_sb 00:09:11.282 ************************************ 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.282 Process raid pid: 62172 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62172 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62172' 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62172 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62172 ']' 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.282 16:17:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.282 16:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.540 [2024-10-08 16:17:04.609589] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:09:11.540 [2024-10-08 16:17:04.609854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.540 [2024-10-08 16:17:04.784991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.799 [2024-10-08 16:17:05.060393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.057 [2024-10-08 16:17:05.287388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.057 [2024-10-08 16:17:05.287706] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.315 [2024-10-08 
16:17:05.559683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.315 [2024-10-08 16:17:05.559986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.315 [2024-10-08 16:17:05.560135] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.315 [2024-10-08 16:17:05.560274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.315 "name": "Existed_Raid", 00:09:12.315 "uuid": "2d0b0190-b9aa-48f0-a627-fc66de778c67", 00:09:12.315 "strip_size_kb": 64, 00:09:12.315 "state": "configuring", 00:09:12.315 "raid_level": "concat", 00:09:12.315 "superblock": true, 00:09:12.315 "num_base_bdevs": 2, 00:09:12.315 "num_base_bdevs_discovered": 0, 00:09:12.315 "num_base_bdevs_operational": 2, 00:09:12.315 "base_bdevs_list": [ 00:09:12.315 { 00:09:12.315 "name": "BaseBdev1", 00:09:12.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.315 "is_configured": false, 00:09:12.315 "data_offset": 0, 00:09:12.315 "data_size": 0 00:09:12.315 }, 00:09:12.315 { 00:09:12.315 "name": "BaseBdev2", 00:09:12.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.315 "is_configured": false, 00:09:12.315 "data_offset": 0, 00:09:12.315 "data_size": 0 00:09:12.315 } 00:09:12.315 ] 00:09:12.315 }' 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.315 16:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 [2024-10-08 16:17:06.075750] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.918 [2024-10-08 16:17:06.075813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 [2024-10-08 16:17:06.087743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.918 [2024-10-08 16:17:06.087940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.918 [2024-10-08 16:17:06.088090] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.918 [2024-10-08 16:17:06.088157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 [2024-10-08 16:17:06.153255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.918 BaseBdev1 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.918 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 [ 00:09:12.918 { 00:09:12.918 "name": "BaseBdev1", 00:09:12.918 "aliases": [ 00:09:12.918 "f13cdeed-e8b4-46c8-9971-a413efc2b9a9" 00:09:12.918 ], 00:09:12.918 "product_name": "Malloc disk", 00:09:12.918 "block_size": 512, 00:09:12.918 "num_blocks": 65536, 00:09:12.918 "uuid": "f13cdeed-e8b4-46c8-9971-a413efc2b9a9", 00:09:12.918 "assigned_rate_limits": { 00:09:12.918 "rw_ios_per_sec": 0, 00:09:12.918 "rw_mbytes_per_sec": 0, 00:09:12.918 "r_mbytes_per_sec": 0, 00:09:12.918 
"w_mbytes_per_sec": 0 00:09:12.918 }, 00:09:12.918 "claimed": true, 00:09:12.918 "claim_type": "exclusive_write", 00:09:12.918 "zoned": false, 00:09:12.918 "supported_io_types": { 00:09:12.918 "read": true, 00:09:12.918 "write": true, 00:09:12.918 "unmap": true, 00:09:12.918 "flush": true, 00:09:12.918 "reset": true, 00:09:12.918 "nvme_admin": false, 00:09:12.918 "nvme_io": false, 00:09:12.918 "nvme_io_md": false, 00:09:12.919 "write_zeroes": true, 00:09:12.919 "zcopy": true, 00:09:12.919 "get_zone_info": false, 00:09:12.919 "zone_management": false, 00:09:12.919 "zone_append": false, 00:09:12.919 "compare": false, 00:09:12.919 "compare_and_write": false, 00:09:12.919 "abort": true, 00:09:12.919 "seek_hole": false, 00:09:12.919 "seek_data": false, 00:09:12.919 "copy": true, 00:09:12.919 "nvme_iov_md": false 00:09:12.919 }, 00:09:12.919 "memory_domains": [ 00:09:12.919 { 00:09:12.919 "dma_device_id": "system", 00:09:12.919 "dma_device_type": 1 00:09:12.919 }, 00:09:12.919 { 00:09:12.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.919 "dma_device_type": 2 00:09:12.919 } 00:09:12.919 ], 00:09:12.919 "driver_specific": {} 00:09:12.919 } 00:09:12.919 ] 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.919 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.192 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.192 "name": "Existed_Raid", 00:09:13.192 "uuid": "f1a2c80d-7e89-4e03-a3b9-3bbd0382acc9", 00:09:13.192 "strip_size_kb": 64, 00:09:13.192 "state": "configuring", 00:09:13.192 "raid_level": "concat", 00:09:13.192 "superblock": true, 00:09:13.192 "num_base_bdevs": 2, 00:09:13.192 "num_base_bdevs_discovered": 1, 00:09:13.192 "num_base_bdevs_operational": 2, 00:09:13.192 "base_bdevs_list": [ 00:09:13.192 { 00:09:13.192 "name": "BaseBdev1", 00:09:13.192 "uuid": "f13cdeed-e8b4-46c8-9971-a413efc2b9a9", 00:09:13.192 "is_configured": true, 00:09:13.192 "data_offset": 2048, 00:09:13.192 "data_size": 63488 00:09:13.192 }, 00:09:13.192 { 00:09:13.192 "name": "BaseBdev2", 00:09:13.192 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:13.192 "is_configured": false, 00:09:13.192 "data_offset": 0, 00:09:13.192 "data_size": 0 00:09:13.192 } 00:09:13.192 ] 00:09:13.192 }' 00:09:13.192 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.192 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 [2024-10-08 16:17:06.697477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.451 [2024-10-08 16:17:06.697589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 [2024-10-08 16:17:06.709508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.451 [2024-10-08 16:17:06.712604] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.451 [2024-10-08 16:17:06.712780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.451 "name": "Existed_Raid", 00:09:13.451 "uuid": "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8", 00:09:13.451 "strip_size_kb": 64, 00:09:13.451 "state": "configuring", 00:09:13.451 "raid_level": "concat", 00:09:13.451 "superblock": true, 00:09:13.451 "num_base_bdevs": 2, 00:09:13.451 "num_base_bdevs_discovered": 1, 00:09:13.451 "num_base_bdevs_operational": 2, 00:09:13.451 "base_bdevs_list": [ 00:09:13.451 { 00:09:13.451 "name": "BaseBdev1", 00:09:13.451 "uuid": "f13cdeed-e8b4-46c8-9971-a413efc2b9a9", 00:09:13.451 "is_configured": true, 00:09:13.451 "data_offset": 2048, 00:09:13.451 "data_size": 63488 00:09:13.451 }, 00:09:13.451 { 00:09:13.451 "name": "BaseBdev2", 00:09:13.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.451 "is_configured": false, 00:09:13.451 "data_offset": 0, 00:09:13.451 "data_size": 0 00:09:13.451 } 00:09:13.451 ] 00:09:13.451 }' 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.451 16:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 [2024-10-08 16:17:07.275470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.019 [2024-10-08 16:17:07.275858] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.019 [2024-10-08 16:17:07.275879] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.019 [2024-10-08 
16:17:07.276219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:14.019 [2024-10-08 16:17:07.276421] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.019 [2024-10-08 16:17:07.276443] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.019 BaseBdev2 00:09:14.019 [2024-10-08 16:17:07.276645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 [ 00:09:14.019 { 00:09:14.019 "name": "BaseBdev2", 00:09:14.019 "aliases": [ 00:09:14.019 "9f320f2c-d985-4e51-b386-62e4aaeadd8c" 00:09:14.019 ], 00:09:14.019 "product_name": "Malloc disk", 00:09:14.019 "block_size": 512, 00:09:14.019 "num_blocks": 65536, 00:09:14.019 "uuid": "9f320f2c-d985-4e51-b386-62e4aaeadd8c", 00:09:14.019 "assigned_rate_limits": { 00:09:14.019 "rw_ios_per_sec": 0, 00:09:14.019 "rw_mbytes_per_sec": 0, 00:09:14.019 "r_mbytes_per_sec": 0, 00:09:14.019 "w_mbytes_per_sec": 0 00:09:14.019 }, 00:09:14.019 "claimed": true, 00:09:14.019 "claim_type": "exclusive_write", 00:09:14.019 "zoned": false, 00:09:14.019 "supported_io_types": { 00:09:14.019 "read": true, 00:09:14.019 "write": true, 00:09:14.019 "unmap": true, 00:09:14.019 "flush": true, 00:09:14.019 "reset": true, 00:09:14.019 "nvme_admin": false, 00:09:14.019 "nvme_io": false, 00:09:14.019 "nvme_io_md": false, 00:09:14.019 "write_zeroes": true, 00:09:14.019 "zcopy": true, 00:09:14.019 "get_zone_info": false, 00:09:14.019 "zone_management": false, 00:09:14.019 "zone_append": false, 00:09:14.019 "compare": false, 00:09:14.019 "compare_and_write": false, 00:09:14.019 "abort": true, 00:09:14.019 "seek_hole": false, 00:09:14.019 "seek_data": false, 00:09:14.019 "copy": true, 00:09:14.019 "nvme_iov_md": false 00:09:14.019 }, 00:09:14.019 "memory_domains": [ 00:09:14.019 { 00:09:14.019 "dma_device_id": "system", 00:09:14.019 "dma_device_type": 1 00:09:14.019 }, 00:09:14.019 { 00:09:14.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.019 "dma_device_type": 2 00:09:14.019 } 00:09:14.019 ], 00:09:14.019 "driver_specific": {} 00:09:14.019 } 00:09:14.019 ] 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.019 16:17:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.278 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.278 "name": "Existed_Raid", 00:09:14.278 "uuid": "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8", 00:09:14.278 "strip_size_kb": 64, 00:09:14.278 "state": "online", 00:09:14.278 "raid_level": "concat", 00:09:14.278 "superblock": true, 00:09:14.278 "num_base_bdevs": 2, 00:09:14.278 "num_base_bdevs_discovered": 2, 00:09:14.278 "num_base_bdevs_operational": 2, 00:09:14.278 "base_bdevs_list": [ 00:09:14.278 { 00:09:14.278 "name": "BaseBdev1", 00:09:14.278 "uuid": "f13cdeed-e8b4-46c8-9971-a413efc2b9a9", 00:09:14.278 "is_configured": true, 00:09:14.278 "data_offset": 2048, 00:09:14.278 "data_size": 63488 00:09:14.278 }, 00:09:14.278 { 00:09:14.278 "name": "BaseBdev2", 00:09:14.278 "uuid": "9f320f2c-d985-4e51-b386-62e4aaeadd8c", 00:09:14.278 "is_configured": true, 00:09:14.278 "data_offset": 2048, 00:09:14.278 "data_size": 63488 00:09:14.278 } 00:09:14.278 ] 00:09:14.278 }' 00:09:14.278 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.278 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.536 16:17:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.536 [2024-10-08 16:17:07.820115] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.536 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.795 "name": "Existed_Raid", 00:09:14.795 "aliases": [ 00:09:14.795 "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8" 00:09:14.795 ], 00:09:14.795 "product_name": "Raid Volume", 00:09:14.795 "block_size": 512, 00:09:14.795 "num_blocks": 126976, 00:09:14.795 "uuid": "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8", 00:09:14.795 "assigned_rate_limits": { 00:09:14.795 "rw_ios_per_sec": 0, 00:09:14.795 "rw_mbytes_per_sec": 0, 00:09:14.795 "r_mbytes_per_sec": 0, 00:09:14.795 "w_mbytes_per_sec": 0 00:09:14.795 }, 00:09:14.795 "claimed": false, 00:09:14.795 "zoned": false, 00:09:14.795 "supported_io_types": { 00:09:14.795 "read": true, 00:09:14.795 "write": true, 00:09:14.795 "unmap": true, 00:09:14.795 "flush": true, 00:09:14.795 "reset": true, 00:09:14.795 "nvme_admin": false, 00:09:14.795 "nvme_io": false, 00:09:14.795 "nvme_io_md": false, 00:09:14.795 "write_zeroes": true, 00:09:14.795 "zcopy": false, 00:09:14.795 "get_zone_info": false, 00:09:14.795 "zone_management": false, 00:09:14.795 "zone_append": false, 00:09:14.795 "compare": false, 00:09:14.795 "compare_and_write": false, 00:09:14.795 "abort": false, 00:09:14.795 "seek_hole": false, 00:09:14.795 "seek_data": false, 00:09:14.795 "copy": false, 00:09:14.795 "nvme_iov_md": 
false 00:09:14.795 }, 00:09:14.795 "memory_domains": [ 00:09:14.795 { 00:09:14.795 "dma_device_id": "system", 00:09:14.795 "dma_device_type": 1 00:09:14.795 }, 00:09:14.795 { 00:09:14.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.795 "dma_device_type": 2 00:09:14.795 }, 00:09:14.795 { 00:09:14.795 "dma_device_id": "system", 00:09:14.795 "dma_device_type": 1 00:09:14.795 }, 00:09:14.795 { 00:09:14.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.795 "dma_device_type": 2 00:09:14.795 } 00:09:14.795 ], 00:09:14.795 "driver_specific": { 00:09:14.795 "raid": { 00:09:14.795 "uuid": "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8", 00:09:14.795 "strip_size_kb": 64, 00:09:14.795 "state": "online", 00:09:14.795 "raid_level": "concat", 00:09:14.795 "superblock": true, 00:09:14.795 "num_base_bdevs": 2, 00:09:14.795 "num_base_bdevs_discovered": 2, 00:09:14.795 "num_base_bdevs_operational": 2, 00:09:14.795 "base_bdevs_list": [ 00:09:14.795 { 00:09:14.795 "name": "BaseBdev1", 00:09:14.795 "uuid": "f13cdeed-e8b4-46c8-9971-a413efc2b9a9", 00:09:14.795 "is_configured": true, 00:09:14.795 "data_offset": 2048, 00:09:14.795 "data_size": 63488 00:09:14.795 }, 00:09:14.795 { 00:09:14.795 "name": "BaseBdev2", 00:09:14.795 "uuid": "9f320f2c-d985-4e51-b386-62e4aaeadd8c", 00:09:14.795 "is_configured": true, 00:09:14.795 "data_offset": 2048, 00:09:14.795 "data_size": 63488 00:09:14.795 } 00:09:14.795 ] 00:09:14.795 } 00:09:14.795 } 00:09:14.795 }' 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:14.795 BaseBdev2' 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.795 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.796 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:14.796 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.796 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.796 16:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.796 16:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.796 
16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.796 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.796 [2024-10-08 16:17:08.079888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.796 [2024-10-08 16:17:08.079959] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.796 [2024-10-08 16:17:08.080037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.055 16:17:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.055 "name": "Existed_Raid", 00:09:15.055 "uuid": "16ea5c30-0220-46ea-b2a6-dfc8e82d13c8", 00:09:15.055 "strip_size_kb": 64, 00:09:15.055 "state": "offline", 00:09:15.055 "raid_level": "concat", 00:09:15.055 "superblock": true, 00:09:15.055 "num_base_bdevs": 2, 00:09:15.055 "num_base_bdevs_discovered": 1, 00:09:15.055 "num_base_bdevs_operational": 1, 00:09:15.055 "base_bdevs_list": [ 00:09:15.055 { 00:09:15.055 "name": null, 00:09:15.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.055 "is_configured": false, 00:09:15.055 "data_offset": 0, 00:09:15.055 "data_size": 63488 00:09:15.055 }, 00:09:15.055 { 00:09:15.055 "name": "BaseBdev2", 00:09:15.055 "uuid": "9f320f2c-d985-4e51-b386-62e4aaeadd8c", 00:09:15.055 "is_configured": true, 
00:09:15.055 "data_offset": 2048, 00:09:15.055 "data_size": 63488 00:09:15.055 } 00:09:15.055 ] 00:09:15.055 }' 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.055 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.620 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 [2024-10-08 16:17:08.757765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.621 [2024-10-08 16:17:08.757853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:15.621 16:17:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62172 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62172 ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62172 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62172 00:09:15.621 killing process with pid 62172 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62172' 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62172 00:09:15.621 [2024-10-08 16:17:08.940784] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.621 16:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62172 00:09:15.880 [2024-10-08 16:17:08.956258] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.255 16:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.255 00:09:17.255 real 0m5.781s 00:09:17.255 user 0m8.457s 00:09:17.255 sys 0m0.886s 00:09:17.255 16:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.255 16:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.255 ************************************ 00:09:17.255 END TEST raid_state_function_test_sb 00:09:17.255 ************************************ 00:09:17.255 16:17:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:17.255 16:17:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:17.255 16:17:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.255 16:17:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.255 ************************************ 00:09:17.255 START TEST raid_superblock_test 00:09:17.255 ************************************ 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:17.255 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:17.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62435 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62435 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62435 ']' 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.256 16:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.256 [2024-10-08 16:17:10.443561] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:17.256 [2024-10-08 16:17:10.443752] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62435 ] 00:09:17.519 [2024-10-08 16:17:10.612292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.790 [2024-10-08 16:17:10.885935] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.790 [2024-10-08 16:17:11.107411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.790 [2024-10-08 16:17:11.107464] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:18.357 
16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 malloc1 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 [2024-10-08 16:17:11.427106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.357 [2024-10-08 16:17:11.427417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.357 [2024-10-08 16:17:11.427494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:18.357 [2024-10-08 16:17:11.427799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.357 [2024-10-08 16:17:11.430884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.357 [2024-10-08 16:17:11.431044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.357 pt1 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 malloc2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 [2024-10-08 16:17:11.510777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.357 [2024-10-08 16:17:11.511084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.357 [2024-10-08 16:17:11.511155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:18.357 [2024-10-08 16:17:11.511174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.357 [2024-10-08 16:17:11.514925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.357 [2024-10-08 16:17:11.514980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.357 
pt2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 [2024-10-08 16:17:11.519292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.357 [2024-10-08 16:17:11.522507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.357 [2024-10-08 16:17:11.522793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:18.357 [2024-10-08 16:17:11.522817] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.357 [2024-10-08 16:17:11.523205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:18.357 [2024-10-08 16:17:11.523435] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:18.357 [2024-10-08 16:17:11.523471] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:18.357 [2024-10-08 16:17:11.523783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.357 "name": "raid_bdev1", 00:09:18.357 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:18.357 "strip_size_kb": 64, 00:09:18.357 "state": "online", 00:09:18.357 "raid_level": "concat", 00:09:18.357 "superblock": true, 00:09:18.357 "num_base_bdevs": 2, 00:09:18.357 "num_base_bdevs_discovered": 2, 00:09:18.357 "num_base_bdevs_operational": 2, 00:09:18.357 "base_bdevs_list": [ 00:09:18.357 { 00:09:18.357 "name": "pt1", 
00:09:18.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.357 "is_configured": true, 00:09:18.357 "data_offset": 2048, 00:09:18.357 "data_size": 63488 00:09:18.357 }, 00:09:18.357 { 00:09:18.357 "name": "pt2", 00:09:18.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.357 "is_configured": true, 00:09:18.357 "data_offset": 2048, 00:09:18.357 "data_size": 63488 00:09:18.357 } 00:09:18.357 ] 00:09:18.357 }' 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.357 16:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.923 [2024-10-08 16:17:12.032303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.923 16:17:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.923 "name": "raid_bdev1", 00:09:18.923 "aliases": [ 00:09:18.923 "a1804159-5113-47d6-863b-6732674ed75c" 00:09:18.923 ], 00:09:18.923 "product_name": "Raid Volume", 00:09:18.923 "block_size": 512, 00:09:18.924 "num_blocks": 126976, 00:09:18.924 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:18.924 "assigned_rate_limits": { 00:09:18.924 "rw_ios_per_sec": 0, 00:09:18.924 "rw_mbytes_per_sec": 0, 00:09:18.924 "r_mbytes_per_sec": 0, 00:09:18.924 "w_mbytes_per_sec": 0 00:09:18.924 }, 00:09:18.924 "claimed": false, 00:09:18.924 "zoned": false, 00:09:18.924 "supported_io_types": { 00:09:18.924 "read": true, 00:09:18.924 "write": true, 00:09:18.924 "unmap": true, 00:09:18.924 "flush": true, 00:09:18.924 "reset": true, 00:09:18.924 "nvme_admin": false, 00:09:18.924 "nvme_io": false, 00:09:18.924 "nvme_io_md": false, 00:09:18.924 "write_zeroes": true, 00:09:18.924 "zcopy": false, 00:09:18.924 "get_zone_info": false, 00:09:18.924 "zone_management": false, 00:09:18.924 "zone_append": false, 00:09:18.924 "compare": false, 00:09:18.924 "compare_and_write": false, 00:09:18.924 "abort": false, 00:09:18.924 "seek_hole": false, 00:09:18.924 "seek_data": false, 00:09:18.924 "copy": false, 00:09:18.924 "nvme_iov_md": false 00:09:18.924 }, 00:09:18.924 "memory_domains": [ 00:09:18.924 { 00:09:18.924 "dma_device_id": "system", 00:09:18.924 "dma_device_type": 1 00:09:18.924 }, 00:09:18.924 { 00:09:18.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.924 "dma_device_type": 2 00:09:18.924 }, 00:09:18.924 { 00:09:18.924 "dma_device_id": "system", 00:09:18.924 "dma_device_type": 1 00:09:18.924 }, 00:09:18.924 { 00:09:18.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.924 "dma_device_type": 2 00:09:18.924 } 00:09:18.924 ], 00:09:18.924 "driver_specific": { 00:09:18.924 "raid": { 00:09:18.924 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:18.924 "strip_size_kb": 64, 00:09:18.924 "state": "online", 00:09:18.924 
"raid_level": "concat", 00:09:18.924 "superblock": true, 00:09:18.924 "num_base_bdevs": 2, 00:09:18.924 "num_base_bdevs_discovered": 2, 00:09:18.924 "num_base_bdevs_operational": 2, 00:09:18.924 "base_bdevs_list": [ 00:09:18.924 { 00:09:18.924 "name": "pt1", 00:09:18.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.924 "is_configured": true, 00:09:18.924 "data_offset": 2048, 00:09:18.924 "data_size": 63488 00:09:18.924 }, 00:09:18.924 { 00:09:18.924 "name": "pt2", 00:09:18.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.924 "is_configured": true, 00:09:18.924 "data_offset": 2048, 00:09:18.924 "data_size": 63488 00:09:18.924 } 00:09:18.924 ] 00:09:18.924 } 00:09:18.924 } 00:09:18.924 }' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.924 pt2' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.924 16:17:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.924 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.182 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.182 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:19.183 [2024-10-08 16:17:12.272667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a1804159-5113-47d6-863b-6732674ed75c 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
a1804159-5113-47d6-863b-6732674ed75c ']' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 [2024-10-08 16:17:12.324341] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.183 [2024-10-08 16:17:12.324410] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.183 [2024-10-08 16:17:12.324597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.183 [2024-10-08 16:17:12.324680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.183 [2024-10-08 16:17:12.324705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.183 16:17:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 [2024-10-08 16:17:12.452399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:19.183 [2024-10-08 16:17:12.455866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:19.183 [2024-10-08 16:17:12.456118] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:19.183 [2024-10-08 16:17:12.456350] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:19.183 [2024-10-08 16:17:12.456505] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.183 [2024-10-08 16:17:12.456609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:19.183 request: 00:09:19.183 { 00:09:19.183 "name": "raid_bdev1", 00:09:19.183 "raid_level": "concat", 00:09:19.183 "base_bdevs": [ 00:09:19.183 "malloc1", 00:09:19.183 "malloc2" 00:09:19.183 ], 00:09:19.183 "strip_size_kb": 64, 
00:09:19.183 "superblock": false, 00:09:19.183 "method": "bdev_raid_create", 00:09:19.183 "req_id": 1 00:09:19.183 } 00:09:19.183 Got JSON-RPC error response 00:09:19.183 response: 00:09:19.183 { 00:09:19.183 "code": -17, 00:09:19.183 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:19.183 } 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.183 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 [2024-10-08 16:17:12.513167] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:09:19.441 [2024-10-08 16:17:12.513587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.441 [2024-10-08 16:17:12.513635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.441 [2024-10-08 16:17:12.513656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.441 [2024-10-08 16:17:12.517284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.441 [2024-10-08 16:17:12.517342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.441 [2024-10-08 16:17:12.517502] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:19.441 [2024-10-08 16:17:12.517625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.441 pt1 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.441 "name": "raid_bdev1", 00:09:19.441 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:19.441 "strip_size_kb": 64, 00:09:19.441 "state": "configuring", 00:09:19.441 "raid_level": "concat", 00:09:19.441 "superblock": true, 00:09:19.441 "num_base_bdevs": 2, 00:09:19.441 "num_base_bdevs_discovered": 1, 00:09:19.441 "num_base_bdevs_operational": 2, 00:09:19.441 "base_bdevs_list": [ 00:09:19.441 { 00:09:19.441 "name": "pt1", 00:09:19.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.442 "is_configured": true, 00:09:19.442 "data_offset": 2048, 00:09:19.442 "data_size": 63488 00:09:19.442 }, 00:09:19.442 { 00:09:19.442 "name": null, 00:09:19.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.442 "is_configured": false, 00:09:19.442 "data_offset": 2048, 00:09:19.442 "data_size": 63488 00:09:19.442 } 00:09:19.442 ] 00:09:19.442 }' 00:09:19.442 16:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.442 16:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.007 [2024-10-08 16:17:13.029762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.007 [2024-10-08 16:17:13.029902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.007 [2024-10-08 16:17:13.029938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:20.007 [2024-10-08 16:17:13.029957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.007 [2024-10-08 16:17:13.030717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.007 [2024-10-08 16:17:13.030764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.007 [2024-10-08 16:17:13.030885] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.007 [2024-10-08 16:17:13.030926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.007 [2024-10-08 16:17:13.031078] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.007 [2024-10-08 16:17:13.031098] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:20.007 [2024-10-08 16:17:13.031403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.007 [2024-10-08 16:17:13.031630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:20.007 [2024-10-08 16:17:13.031648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:20.007 [2024-10-08 16:17:13.031826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.007 pt2 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.007 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.008 "name": "raid_bdev1", 00:09:20.008 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:20.008 "strip_size_kb": 64, 00:09:20.008 "state": "online", 00:09:20.008 "raid_level": "concat", 00:09:20.008 "superblock": true, 00:09:20.008 "num_base_bdevs": 2, 00:09:20.008 "num_base_bdevs_discovered": 2, 00:09:20.008 "num_base_bdevs_operational": 2, 00:09:20.008 "base_bdevs_list": [ 00:09:20.008 { 00:09:20.008 "name": "pt1", 00:09:20.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.008 "is_configured": true, 00:09:20.008 "data_offset": 2048, 00:09:20.008 "data_size": 63488 00:09:20.008 }, 00:09:20.008 { 00:09:20.008 "name": "pt2", 00:09:20.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.008 "is_configured": true, 00:09:20.008 "data_offset": 2048, 00:09:20.008 "data_size": 63488 00:09:20.008 } 00:09:20.008 ] 00:09:20.008 }' 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.008 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.266 16:17:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.266 [2024-10-08 16:17:13.534248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.266 "name": "raid_bdev1", 00:09:20.266 "aliases": [ 00:09:20.266 "a1804159-5113-47d6-863b-6732674ed75c" 00:09:20.266 ], 00:09:20.266 "product_name": "Raid Volume", 00:09:20.266 "block_size": 512, 00:09:20.266 "num_blocks": 126976, 00:09:20.266 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:20.266 "assigned_rate_limits": { 00:09:20.266 "rw_ios_per_sec": 0, 00:09:20.266 "rw_mbytes_per_sec": 0, 00:09:20.266 "r_mbytes_per_sec": 0, 00:09:20.266 "w_mbytes_per_sec": 0 00:09:20.266 }, 00:09:20.266 "claimed": false, 00:09:20.266 "zoned": false, 00:09:20.266 "supported_io_types": { 00:09:20.266 "read": true, 00:09:20.266 "write": true, 00:09:20.266 "unmap": true, 00:09:20.266 "flush": true, 00:09:20.266 "reset": true, 00:09:20.266 "nvme_admin": false, 00:09:20.266 "nvme_io": false, 00:09:20.266 "nvme_io_md": false, 00:09:20.266 "write_zeroes": true, 00:09:20.266 "zcopy": false, 00:09:20.266 "get_zone_info": false, 00:09:20.266 "zone_management": false, 00:09:20.266 "zone_append": false, 00:09:20.266 "compare": false, 00:09:20.266 "compare_and_write": false, 00:09:20.266 "abort": false, 00:09:20.266 "seek_hole": false, 00:09:20.266 
"seek_data": false, 00:09:20.266 "copy": false, 00:09:20.266 "nvme_iov_md": false 00:09:20.266 }, 00:09:20.266 "memory_domains": [ 00:09:20.266 { 00:09:20.266 "dma_device_id": "system", 00:09:20.266 "dma_device_type": 1 00:09:20.266 }, 00:09:20.266 { 00:09:20.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.266 "dma_device_type": 2 00:09:20.266 }, 00:09:20.266 { 00:09:20.266 "dma_device_id": "system", 00:09:20.266 "dma_device_type": 1 00:09:20.266 }, 00:09:20.266 { 00:09:20.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.266 "dma_device_type": 2 00:09:20.266 } 00:09:20.266 ], 00:09:20.266 "driver_specific": { 00:09:20.266 "raid": { 00:09:20.266 "uuid": "a1804159-5113-47d6-863b-6732674ed75c", 00:09:20.266 "strip_size_kb": 64, 00:09:20.266 "state": "online", 00:09:20.266 "raid_level": "concat", 00:09:20.266 "superblock": true, 00:09:20.266 "num_base_bdevs": 2, 00:09:20.266 "num_base_bdevs_discovered": 2, 00:09:20.266 "num_base_bdevs_operational": 2, 00:09:20.266 "base_bdevs_list": [ 00:09:20.266 { 00:09:20.266 "name": "pt1", 00:09:20.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.266 "is_configured": true, 00:09:20.266 "data_offset": 2048, 00:09:20.266 "data_size": 63488 00:09:20.266 }, 00:09:20.266 { 00:09:20.266 "name": "pt2", 00:09:20.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.266 "is_configured": true, 00:09:20.266 "data_offset": 2048, 00:09:20.266 "data_size": 63488 00:09:20.266 } 00:09:20.266 ] 00:09:20.266 } 00:09:20.266 } 00:09:20.266 }' 00:09:20.266 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.525 pt2' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.525 16:17:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:20.525 [2024-10-08 16:17:13.814312] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.525 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a1804159-5113-47d6-863b-6732674ed75c '!=' a1804159-5113-47d6-863b-6732674ed75c ']' 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62435 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62435 ']' 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62435 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62435 00:09:20.783 killing process with pid 62435 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62435' 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62435 00:09:20.783 [2024-10-08 16:17:13.892652] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.783 16:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62435 00:09:20.783 [2024-10-08 16:17:13.892811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.783 [2024-10-08 16:17:13.892887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.783 [2024-10-08 16:17:13.892922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.783 [2024-10-08 16:17:14.098986] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.157 16:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:22.157 00:09:22.157 real 0m5.103s 00:09:22.157 user 0m7.218s 00:09:22.157 sys 0m0.780s 00:09:22.157 16:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.157 16:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.157 ************************************ 00:09:22.157 END TEST raid_superblock_test 00:09:22.157 ************************************ 00:09:22.459 16:17:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:22.459 16:17:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:22.459 16:17:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.459 16:17:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.459 ************************************ 00:09:22.459 START TEST raid_read_error_test 00:09:22.459 ************************************ 00:09:22.459 16:17:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:22.459 16:17:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y8dpIVVp91 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62652 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62652 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62652 ']' 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.459 16:17:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.459 [2024-10-08 16:17:15.612103] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:22.459 [2024-10-08 16:17:15.612286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62652 ] 00:09:22.717 [2024-10-08 16:17:15.785403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.975 [2024-10-08 16:17:16.065296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.975 [2024-10-08 16:17:16.289654] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.975 [2024-10-08 16:17:16.289753] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 BaseBdev1_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 true 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 [2024-10-08 16:17:16.717347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.542 [2024-10-08 16:17:16.717452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.542 [2024-10-08 16:17:16.717481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.542 [2024-10-08 16:17:16.717500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.542 [2024-10-08 16:17:16.720554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.542 [2024-10-08 16:17:16.720602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.542 BaseBdev1 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 BaseBdev2_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 true 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 [2024-10-08 16:17:16.790067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.542 [2024-10-08 16:17:16.790168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.542 [2024-10-08 16:17:16.790196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.542 [2024-10-08 16:17:16.790214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.542 [2024-10-08 16:17:16.793236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.542 [2024-10-08 16:17:16.793287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.542 BaseBdev2 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 [2024-10-08 16:17:16.798244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:23.542 [2024-10-08 16:17:16.801142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.542 [2024-10-08 16:17:16.801410] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.542 [2024-10-08 16:17:16.801434] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:23.542 [2024-10-08 16:17:16.801763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:23.542 [2024-10-08 16:17:16.802008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.542 [2024-10-08 16:17:16.802026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:23.542 [2024-10-08 16:17:16.802275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.542 "name": "raid_bdev1", 00:09:23.542 "uuid": "3e90c7f0-f387-40fc-83a4-6d07d8474cc0", 00:09:23.542 "strip_size_kb": 64, 00:09:23.542 "state": "online", 00:09:23.542 "raid_level": "concat", 00:09:23.542 "superblock": true, 00:09:23.542 "num_base_bdevs": 2, 00:09:23.542 "num_base_bdevs_discovered": 2, 00:09:23.542 "num_base_bdevs_operational": 2, 00:09:23.542 "base_bdevs_list": [ 00:09:23.542 { 00:09:23.542 "name": "BaseBdev1", 00:09:23.542 "uuid": "3466ecdb-7cdc-50a1-9576-81d2fde5d112", 00:09:23.542 "is_configured": true, 00:09:23.542 "data_offset": 2048, 00:09:23.542 "data_size": 63488 00:09:23.542 }, 00:09:23.542 { 00:09:23.542 "name": "BaseBdev2", 00:09:23.542 "uuid": "e27ad568-7c05-5dc2-a83a-e7c423d4c1b7", 00:09:23.542 "is_configured": true, 00:09:23.542 "data_offset": 2048, 00:09:23.542 "data_size": 63488 00:09:23.542 } 00:09:23.542 ] 00:09:23.542 }' 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.542 16:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.109 16:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.109 16:17:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.367 [2024-10-08 16:17:17.464027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.302 "name": "raid_bdev1", 00:09:25.302 "uuid": "3e90c7f0-f387-40fc-83a4-6d07d8474cc0", 00:09:25.302 "strip_size_kb": 64, 00:09:25.302 "state": "online", 00:09:25.302 "raid_level": "concat", 00:09:25.302 "superblock": true, 00:09:25.302 "num_base_bdevs": 2, 00:09:25.302 "num_base_bdevs_discovered": 2, 00:09:25.302 "num_base_bdevs_operational": 2, 00:09:25.302 "base_bdevs_list": [ 00:09:25.302 { 00:09:25.302 "name": "BaseBdev1", 00:09:25.302 "uuid": "3466ecdb-7cdc-50a1-9576-81d2fde5d112", 00:09:25.302 "is_configured": true, 00:09:25.302 "data_offset": 2048, 00:09:25.302 "data_size": 63488 00:09:25.302 }, 00:09:25.302 { 00:09:25.302 "name": "BaseBdev2", 00:09:25.302 "uuid": "e27ad568-7c05-5dc2-a83a-e7c423d4c1b7", 00:09:25.302 "is_configured": true, 00:09:25.302 "data_offset": 2048, 00:09:25.302 "data_size": 63488 00:09:25.302 } 00:09:25.302 ] 00:09:25.302 }' 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.302 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.867 16:17:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.867 [2024-10-08 16:17:18.906592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.867 [2024-10-08 16:17:18.906872] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.867 [2024-10-08 16:17:18.910421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.867 { 00:09:25.867 "results": [ 00:09:25.867 { 00:09:25.867 "job": "raid_bdev1", 00:09:25.867 "core_mask": "0x1", 00:09:25.867 "workload": "randrw", 00:09:25.867 "percentage": 50, 00:09:25.867 "status": "finished", 00:09:25.867 "queue_depth": 1, 00:09:25.867 "io_size": 131072, 00:09:25.867 "runtime": 1.44031, 00:09:25.867 "iops": 9777.061882511403, 00:09:25.867 "mibps": 1222.1327353139254, 00:09:25.867 "io_failed": 1, 00:09:25.867 "io_timeout": 0, 00:09:25.867 "avg_latency_us": 144.03906192508055, 00:09:25.867 "min_latency_us": 43.985454545454544, 00:09:25.867 "max_latency_us": 1846.9236363636364 00:09:25.867 } 00:09:25.867 ], 00:09:25.867 "core_count": 1 00:09:25.867 } 00:09:25.867 [2024-10-08 16:17:18.910712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.867 [2024-10-08 16:17:18.910777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.867 [2024-10-08 16:17:18.910806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62652 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62652 ']' 00:09:25.867 16:17:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62652 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62652 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62652' 00:09:25.867 killing process with pid 62652 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62652 00:09:25.867 [2024-10-08 16:17:18.952374] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.867 16:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62652 00:09:25.867 [2024-10-08 16:17:19.081385] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y8dpIVVp91 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:09:27.241 00:09:27.241 real 0m4.992s 00:09:27.241 user 0m6.184s 00:09:27.241 sys 0m0.640s 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.241 ************************************ 00:09:27.241 END TEST raid_read_error_test 00:09:27.241 ************************************ 00:09:27.241 16:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.241 16:17:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:27.241 16:17:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:27.241 16:17:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.241 16:17:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.241 ************************************ 00:09:27.241 START TEST raid_write_error_test 00:09:27.241 ************************************ 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.241 16:17:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kHTxtcUVaG 00:09:27.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62798 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62798 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62798 ']' 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.241 16:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.499 [2024-10-08 16:17:20.676288] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:27.499 [2024-10-08 16:17:20.676671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:09:27.756 [2024-10-08 16:17:20.877976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.014 [2024-10-08 16:17:21.150966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.272 [2024-10-08 16:17:21.373121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.272 [2024-10-08 16:17:21.373453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.530 BaseBdev1_malloc 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.530 true 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.530 [2024-10-08 16:17:21.788494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.530 [2024-10-08 16:17:21.788588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.530 [2024-10-08 16:17:21.788616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.530 [2024-10-08 16:17:21.788635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.530 [2024-10-08 16:17:21.791634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.530 [2024-10-08 16:17:21.791684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.530 BaseBdev1 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.530 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.789 BaseBdev2_malloc 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.789 16:17:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.789 true 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.789 [2024-10-08 16:17:21.872535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.789 [2024-10-08 16:17:21.872629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.789 [2024-10-08 16:17:21.872657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.789 [2024-10-08 16:17:21.872675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.789 [2024-10-08 16:17:21.875626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.789 [2024-10-08 16:17:21.875707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.789 BaseBdev2 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.789 [2024-10-08 16:17:21.884659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:28.789 [2024-10-08 16:17:21.887286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.789 [2024-10-08 16:17:21.887567] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.789 [2024-10-08 16:17:21.887593] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:28.789 [2024-10-08 16:17:21.887903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.789 [2024-10-08 16:17:21.888128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.789 [2024-10-08 16:17:21.888145] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:28.789 [2024-10-08 16:17:21.888349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.789 16:17:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.789 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.789 "name": "raid_bdev1", 00:09:28.789 "uuid": "29ad28d0-25b1-4471-b62c-c24013f74362", 00:09:28.789 "strip_size_kb": 64, 00:09:28.789 "state": "online", 00:09:28.790 "raid_level": "concat", 00:09:28.790 "superblock": true, 00:09:28.790 "num_base_bdevs": 2, 00:09:28.790 "num_base_bdevs_discovered": 2, 00:09:28.790 "num_base_bdevs_operational": 2, 00:09:28.790 "base_bdevs_list": [ 00:09:28.790 { 00:09:28.790 "name": "BaseBdev1", 00:09:28.790 "uuid": "f5bb4c0e-5b6f-5b2e-b3e6-2e160ac84cc6", 00:09:28.790 "is_configured": true, 00:09:28.790 "data_offset": 2048, 00:09:28.790 "data_size": 63488 00:09:28.790 }, 00:09:28.790 { 00:09:28.790 "name": "BaseBdev2", 00:09:28.790 "uuid": "e6638151-351b-5cd3-81df-8518f657dd03", 00:09:28.790 "is_configured": true, 00:09:28.790 "data_offset": 2048, 00:09:28.790 "data_size": 63488 00:09:28.790 } 00:09:28.790 ] 00:09:28.790 }' 00:09:28.790 16:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.790 16:17:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.356 16:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:29.356 16:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.356 [2024-10-08 16:17:22.514360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.292 "name": "raid_bdev1", 00:09:30.292 "uuid": "29ad28d0-25b1-4471-b62c-c24013f74362", 00:09:30.292 "strip_size_kb": 64, 00:09:30.292 "state": "online", 00:09:30.292 "raid_level": "concat", 00:09:30.292 "superblock": true, 00:09:30.292 "num_base_bdevs": 2, 00:09:30.292 "num_base_bdevs_discovered": 2, 00:09:30.292 "num_base_bdevs_operational": 2, 00:09:30.292 "base_bdevs_list": [ 00:09:30.292 { 00:09:30.292 "name": "BaseBdev1", 00:09:30.292 "uuid": "f5bb4c0e-5b6f-5b2e-b3e6-2e160ac84cc6", 00:09:30.292 "is_configured": true, 00:09:30.292 "data_offset": 2048, 00:09:30.292 "data_size": 63488 00:09:30.292 }, 00:09:30.292 { 00:09:30.292 "name": "BaseBdev2", 00:09:30.292 "uuid": "e6638151-351b-5cd3-81df-8518f657dd03", 00:09:30.292 "is_configured": true, 00:09:30.292 "data_offset": 2048, 00:09:30.292 "data_size": 63488 00:09:30.292 } 00:09:30.292 ] 00:09:30.292 }' 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.292 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.858 [2024-10-08 16:17:23.912204] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.858 [2024-10-08 16:17:23.912400] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.858 [2024-10-08 16:17:23.915837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.858 [2024-10-08 16:17:23.915897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.858 [2024-10-08 16:17:23.915947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.858 [2024-10-08 16:17:23.915968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:30.858 { 00:09:30.858 "results": [ 00:09:30.858 { 00:09:30.858 "job": "raid_bdev1", 00:09:30.858 "core_mask": "0x1", 00:09:30.858 "workload": "randrw", 00:09:30.858 "percentage": 50, 00:09:30.858 "status": "finished", 00:09:30.858 "queue_depth": 1, 00:09:30.858 "io_size": 131072, 00:09:30.858 "runtime": 1.395389, 00:09:30.858 "iops": 10214.355996786559, 00:09:30.858 "mibps": 1276.7944995983198, 00:09:30.858 "io_failed": 1, 00:09:30.858 "io_timeout": 0, 00:09:30.858 "avg_latency_us": 137.93092694873528, 00:09:30.858 "min_latency_us": 41.658181818181816, 00:09:30.858 "max_latency_us": 1861.8181818181818 00:09:30.858 } 00:09:30.858 ], 00:09:30.858 "core_count": 1 00:09:30.858 } 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62798 00:09:30.858 16:17:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62798 ']' 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62798 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62798 00:09:30.858 killing process with pid 62798 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62798' 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62798 00:09:30.858 16:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62798 00:09:30.858 [2024-10-08 16:17:23.964108] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.858 [2024-10-08 16:17:24.097614] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kHTxtcUVaG 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.235 ************************************ 00:09:32.235 END TEST raid_write_error_test 00:09:32.235 ************************************ 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:32.235 00:09:32.235 real 0m4.930s 00:09:32.235 user 0m6.048s 00:09:32.235 sys 0m0.655s 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.235 16:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 16:17:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.235 16:17:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:32.235 16:17:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.235 16:17:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.235 16:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 ************************************ 00:09:32.235 START TEST raid_state_function_test 00:09:32.235 ************************************ 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62947 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.235 Process raid pid: 62947 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62947' 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62947 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62947 ']' 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.235 16:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.492 [2024-10-08 16:17:25.635233] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:32.492 [2024-10-08 16:17:25.635413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.492 [2024-10-08 16:17:25.804033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.057 [2024-10-08 16:17:26.078494] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.057 [2024-10-08 16:17:26.304278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.057 [2024-10-08 16:17:26.304343] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.623 [2024-10-08 16:17:26.677974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.623 [2024-10-08 16:17:26.678046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.623 [2024-10-08 16:17:26.678065] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.623 [2024-10-08 16:17:26.678085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.623 16:17:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.623 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.624 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.624 "name": "Existed_Raid", 00:09:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.624 "strip_size_kb": 0, 00:09:33.624 "state": "configuring", 00:09:33.624 
"raid_level": "raid1", 00:09:33.624 "superblock": false, 00:09:33.624 "num_base_bdevs": 2, 00:09:33.624 "num_base_bdevs_discovered": 0, 00:09:33.624 "num_base_bdevs_operational": 2, 00:09:33.624 "base_bdevs_list": [ 00:09:33.624 { 00:09:33.624 "name": "BaseBdev1", 00:09:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.624 "is_configured": false, 00:09:33.624 "data_offset": 0, 00:09:33.624 "data_size": 0 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "name": "BaseBdev2", 00:09:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.624 "is_configured": false, 00:09:33.624 "data_offset": 0, 00:09:33.624 "data_size": 0 00:09:33.624 } 00:09:33.624 ] 00:09:33.624 }' 00:09:33.624 16:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.624 16:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.881 [2024-10-08 16:17:27.198025] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.881 [2024-10-08 16:17:27.198079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.881 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:34.140 [2024-10-08 16:17:27.206003] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.140 [2024-10-08 16:17:27.206060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.140 [2024-10-08 16:17:27.206077] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.140 [2024-10-08 16:17:27.206097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.140 [2024-10-08 16:17:27.262797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.140 BaseBdev1 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.140 [ 00:09:34.140 { 00:09:34.140 "name": "BaseBdev1", 00:09:34.140 "aliases": [ 00:09:34.140 "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0" 00:09:34.140 ], 00:09:34.140 "product_name": "Malloc disk", 00:09:34.140 "block_size": 512, 00:09:34.140 "num_blocks": 65536, 00:09:34.140 "uuid": "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0", 00:09:34.140 "assigned_rate_limits": { 00:09:34.140 "rw_ios_per_sec": 0, 00:09:34.140 "rw_mbytes_per_sec": 0, 00:09:34.140 "r_mbytes_per_sec": 0, 00:09:34.140 "w_mbytes_per_sec": 0 00:09:34.140 }, 00:09:34.140 "claimed": true, 00:09:34.140 "claim_type": "exclusive_write", 00:09:34.140 "zoned": false, 00:09:34.140 "supported_io_types": { 00:09:34.140 "read": true, 00:09:34.140 "write": true, 00:09:34.140 "unmap": true, 00:09:34.140 "flush": true, 00:09:34.140 "reset": true, 00:09:34.140 "nvme_admin": false, 00:09:34.140 "nvme_io": false, 00:09:34.140 "nvme_io_md": false, 00:09:34.140 "write_zeroes": true, 00:09:34.140 "zcopy": true, 00:09:34.140 "get_zone_info": false, 00:09:34.140 "zone_management": false, 00:09:34.140 "zone_append": false, 00:09:34.140 "compare": false, 00:09:34.140 "compare_and_write": false, 00:09:34.140 "abort": true, 00:09:34.140 "seek_hole": false, 00:09:34.140 "seek_data": false, 00:09:34.140 "copy": true, 00:09:34.140 "nvme_iov_md": 
false 00:09:34.140 }, 00:09:34.140 "memory_domains": [ 00:09:34.140 { 00:09:34.140 "dma_device_id": "system", 00:09:34.140 "dma_device_type": 1 00:09:34.140 }, 00:09:34.140 { 00:09:34.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.140 "dma_device_type": 2 00:09:34.140 } 00:09:34.140 ], 00:09:34.140 "driver_specific": {} 00:09:34.140 } 00:09:34.140 ] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.140 
16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.140 "name": "Existed_Raid", 00:09:34.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.140 "strip_size_kb": 0, 00:09:34.140 "state": "configuring", 00:09:34.140 "raid_level": "raid1", 00:09:34.140 "superblock": false, 00:09:34.140 "num_base_bdevs": 2, 00:09:34.140 "num_base_bdevs_discovered": 1, 00:09:34.140 "num_base_bdevs_operational": 2, 00:09:34.140 "base_bdevs_list": [ 00:09:34.140 { 00:09:34.140 "name": "BaseBdev1", 00:09:34.140 "uuid": "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0", 00:09:34.140 "is_configured": true, 00:09:34.140 "data_offset": 0, 00:09:34.140 "data_size": 65536 00:09:34.140 }, 00:09:34.140 { 00:09:34.140 "name": "BaseBdev2", 00:09:34.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.140 "is_configured": false, 00:09:34.140 "data_offset": 0, 00:09:34.140 "data_size": 0 00:09:34.140 } 00:09:34.140 ] 00:09:34.140 }' 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.140 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.721 [2024-10-08 16:17:27.823013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.721 [2024-10-08 16:17:27.823096] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.721 [2024-10-08 16:17:27.835097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.721 [2024-10-08 16:17:27.837938] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.721 [2024-10-08 16:17:27.838111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.721 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.722 "name": "Existed_Raid", 00:09:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.722 "strip_size_kb": 0, 00:09:34.722 "state": "configuring", 00:09:34.722 "raid_level": "raid1", 00:09:34.722 "superblock": false, 00:09:34.722 "num_base_bdevs": 2, 00:09:34.722 "num_base_bdevs_discovered": 1, 00:09:34.722 "num_base_bdevs_operational": 2, 00:09:34.722 "base_bdevs_list": [ 00:09:34.722 { 00:09:34.722 "name": "BaseBdev1", 00:09:34.722 "uuid": "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0", 00:09:34.722 "is_configured": true, 00:09:34.722 "data_offset": 0, 00:09:34.722 "data_size": 65536 00:09:34.722 }, 00:09:34.722 { 00:09:34.722 "name": "BaseBdev2", 00:09:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.722 "is_configured": false, 00:09:34.722 "data_offset": 0, 00:09:34.722 "data_size": 0 00:09:34.722 } 00:09:34.722 ] 
00:09:34.722 }' 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.722 16:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.285 [2024-10-08 16:17:28.386163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.285 [2024-10-08 16:17:28.386492] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.285 [2024-10-08 16:17:28.386536] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.285 [2024-10-08 16:17:28.386906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.285 [2024-10-08 16:17:28.387128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.285 [2024-10-08 16:17:28.387153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.285 [2024-10-08 16:17:28.387511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.285 BaseBdev2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.285 [ 00:09:35.285 { 00:09:35.285 "name": "BaseBdev2", 00:09:35.285 "aliases": [ 00:09:35.285 "b1539e41-2628-4719-8f75-39443ae1bd91" 00:09:35.285 ], 00:09:35.285 "product_name": "Malloc disk", 00:09:35.285 "block_size": 512, 00:09:35.285 "num_blocks": 65536, 00:09:35.285 "uuid": "b1539e41-2628-4719-8f75-39443ae1bd91", 00:09:35.285 "assigned_rate_limits": { 00:09:35.285 "rw_ios_per_sec": 0, 00:09:35.285 "rw_mbytes_per_sec": 0, 00:09:35.285 "r_mbytes_per_sec": 0, 00:09:35.285 "w_mbytes_per_sec": 0 00:09:35.285 }, 00:09:35.285 "claimed": true, 00:09:35.285 "claim_type": "exclusive_write", 00:09:35.285 "zoned": false, 00:09:35.285 "supported_io_types": { 00:09:35.285 "read": true, 00:09:35.285 "write": true, 00:09:35.285 "unmap": true, 00:09:35.285 "flush": true, 00:09:35.285 "reset": true, 00:09:35.285 "nvme_admin": false, 00:09:35.285 "nvme_io": false, 00:09:35.285 "nvme_io_md": false, 00:09:35.285 "write_zeroes": 
true, 00:09:35.285 "zcopy": true, 00:09:35.285 "get_zone_info": false, 00:09:35.285 "zone_management": false, 00:09:35.285 "zone_append": false, 00:09:35.285 "compare": false, 00:09:35.285 "compare_and_write": false, 00:09:35.285 "abort": true, 00:09:35.285 "seek_hole": false, 00:09:35.285 "seek_data": false, 00:09:35.285 "copy": true, 00:09:35.285 "nvme_iov_md": false 00:09:35.285 }, 00:09:35.285 "memory_domains": [ 00:09:35.285 { 00:09:35.285 "dma_device_id": "system", 00:09:35.285 "dma_device_type": 1 00:09:35.285 }, 00:09:35.285 { 00:09:35.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.285 "dma_device_type": 2 00:09:35.285 } 00:09:35.285 ], 00:09:35.285 "driver_specific": {} 00:09:35.285 } 00:09:35.285 ] 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.285 16:17:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.285 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.286 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.286 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.286 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.286 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.286 "name": "Existed_Raid", 00:09:35.286 "uuid": "444d6839-5191-4687-a5cf-12925d59213f", 00:09:35.286 "strip_size_kb": 0, 00:09:35.286 "state": "online", 00:09:35.286 "raid_level": "raid1", 00:09:35.286 "superblock": false, 00:09:35.286 "num_base_bdevs": 2, 00:09:35.286 "num_base_bdevs_discovered": 2, 00:09:35.286 "num_base_bdevs_operational": 2, 00:09:35.286 "base_bdevs_list": [ 00:09:35.286 { 00:09:35.286 "name": "BaseBdev1", 00:09:35.286 "uuid": "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0", 00:09:35.286 "is_configured": true, 00:09:35.286 "data_offset": 0, 00:09:35.286 "data_size": 65536 00:09:35.286 }, 00:09:35.286 { 00:09:35.286 "name": "BaseBdev2", 00:09:35.286 "uuid": "b1539e41-2628-4719-8f75-39443ae1bd91", 00:09:35.286 "is_configured": true, 00:09:35.286 "data_offset": 0, 00:09:35.286 "data_size": 65536 00:09:35.286 } 00:09:35.286 ] 00:09:35.286 }' 00:09:35.286 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.286 16:17:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.852 [2024-10-08 16:17:28.934860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.852 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.852 "name": "Existed_Raid", 00:09:35.852 "aliases": [ 00:09:35.852 "444d6839-5191-4687-a5cf-12925d59213f" 00:09:35.852 ], 00:09:35.852 "product_name": "Raid Volume", 00:09:35.852 "block_size": 512, 00:09:35.852 "num_blocks": 65536, 00:09:35.852 "uuid": "444d6839-5191-4687-a5cf-12925d59213f", 00:09:35.852 "assigned_rate_limits": { 00:09:35.852 "rw_ios_per_sec": 0, 00:09:35.852 "rw_mbytes_per_sec": 0, 00:09:35.852 "r_mbytes_per_sec": 0, 00:09:35.852 
"w_mbytes_per_sec": 0 00:09:35.852 }, 00:09:35.852 "claimed": false, 00:09:35.852 "zoned": false, 00:09:35.852 "supported_io_types": { 00:09:35.852 "read": true, 00:09:35.852 "write": true, 00:09:35.852 "unmap": false, 00:09:35.852 "flush": false, 00:09:35.852 "reset": true, 00:09:35.852 "nvme_admin": false, 00:09:35.852 "nvme_io": false, 00:09:35.852 "nvme_io_md": false, 00:09:35.852 "write_zeroes": true, 00:09:35.852 "zcopy": false, 00:09:35.852 "get_zone_info": false, 00:09:35.852 "zone_management": false, 00:09:35.852 "zone_append": false, 00:09:35.852 "compare": false, 00:09:35.852 "compare_and_write": false, 00:09:35.852 "abort": false, 00:09:35.852 "seek_hole": false, 00:09:35.852 "seek_data": false, 00:09:35.852 "copy": false, 00:09:35.852 "nvme_iov_md": false 00:09:35.852 }, 00:09:35.852 "memory_domains": [ 00:09:35.852 { 00:09:35.852 "dma_device_id": "system", 00:09:35.852 "dma_device_type": 1 00:09:35.852 }, 00:09:35.852 { 00:09:35.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.852 "dma_device_type": 2 00:09:35.852 }, 00:09:35.852 { 00:09:35.852 "dma_device_id": "system", 00:09:35.852 "dma_device_type": 1 00:09:35.852 }, 00:09:35.852 { 00:09:35.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.852 "dma_device_type": 2 00:09:35.852 } 00:09:35.852 ], 00:09:35.852 "driver_specific": { 00:09:35.852 "raid": { 00:09:35.852 "uuid": "444d6839-5191-4687-a5cf-12925d59213f", 00:09:35.852 "strip_size_kb": 0, 00:09:35.852 "state": "online", 00:09:35.852 "raid_level": "raid1", 00:09:35.852 "superblock": false, 00:09:35.852 "num_base_bdevs": 2, 00:09:35.852 "num_base_bdevs_discovered": 2, 00:09:35.853 "num_base_bdevs_operational": 2, 00:09:35.853 "base_bdevs_list": [ 00:09:35.853 { 00:09:35.853 "name": "BaseBdev1", 00:09:35.853 "uuid": "2ef4ca0f-8e9e-41b9-b2dd-3cd3ae5ff4c0", 00:09:35.853 "is_configured": true, 00:09:35.853 "data_offset": 0, 00:09:35.853 "data_size": 65536 00:09:35.853 }, 00:09:35.853 { 00:09:35.853 "name": "BaseBdev2", 00:09:35.853 "uuid": 
"b1539e41-2628-4719-8f75-39443ae1bd91", 00:09:35.853 "is_configured": true, 00:09:35.853 "data_offset": 0, 00:09:35.853 "data_size": 65536 00:09:35.853 } 00:09:35.853 ] 00:09:35.853 } 00:09:35.853 } 00:09:35.853 }' 00:09:35.853 16:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.853 BaseBdev2' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.853 16:17:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.853 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.112 [2024-10-08 16:17:29.198571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.112 "name": "Existed_Raid", 00:09:36.112 "uuid": "444d6839-5191-4687-a5cf-12925d59213f", 00:09:36.112 "strip_size_kb": 0, 00:09:36.112 "state": "online", 00:09:36.112 "raid_level": "raid1", 00:09:36.112 "superblock": false, 00:09:36.112 "num_base_bdevs": 2, 00:09:36.112 "num_base_bdevs_discovered": 1, 00:09:36.112 "num_base_bdevs_operational": 1, 00:09:36.112 "base_bdevs_list": [ 00:09:36.112 { 
00:09:36.112 "name": null, 00:09:36.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.112 "is_configured": false, 00:09:36.112 "data_offset": 0, 00:09:36.112 "data_size": 65536 00:09:36.112 }, 00:09:36.112 { 00:09:36.112 "name": "BaseBdev2", 00:09:36.112 "uuid": "b1539e41-2628-4719-8f75-39443ae1bd91", 00:09:36.112 "is_configured": true, 00:09:36.112 "data_offset": 0, 00:09:36.112 "data_size": 65536 00:09:36.112 } 00:09:36.112 ] 00:09:36.112 }' 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.112 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:36.679 [2024-10-08 16:17:29.823661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.679 [2024-10-08 16:17:29.823961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.679 [2024-10-08 16:17:29.923978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.679 [2024-10-08 16:17:29.924071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.679 [2024-10-08 16:17:29.924093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62947 00:09:36.679 16:17:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62947 ']' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62947 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.679 16:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62947 00:09:36.945 killing process with pid 62947 00:09:36.945 16:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.945 16:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.945 16:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62947' 00:09:36.945 16:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62947 00:09:36.945 [2024-10-08 16:17:30.010026] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.945 16:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62947 00:09:36.945 [2024-10-08 16:17:30.026529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.338 00:09:38.338 real 0m5.840s 00:09:38.338 user 0m8.567s 00:09:38.338 sys 0m0.872s 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.338 ************************************ 00:09:38.338 END TEST raid_state_function_test 00:09:38.338 ************************************ 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.338 16:17:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:38.338 16:17:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:38.338 16:17:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.338 16:17:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.338 ************************************ 00:09:38.338 START TEST raid_state_function_test_sb 00:09:38.338 ************************************ 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63212 00:09:38.338 Process raid pid: 63212 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63212' 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63212 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63212 ']' 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.338 16:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.338 [2024-10-08 16:17:31.538621] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:09:38.338 [2024-10-08 16:17:31.539115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.596 [2024-10-08 16:17:31.711168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.854 [2024-10-08 16:17:31.973806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.113 [2024-10-08 16:17:32.179558] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.113 [2024-10-08 16:17:32.179619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:39.371 [2024-10-08 16:17:32.495673] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.371 [2024-10-08 16:17:32.495755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.371 [2024-10-08 16:17:32.495772] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.371 [2024-10-08 16:17:32.495791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.371 16:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.371 "name": "Existed_Raid", 00:09:39.371 "uuid": "e2ec6651-ada0-4c53-b04c-46859cfca290", 00:09:39.371 "strip_size_kb": 0, 00:09:39.371 "state": "configuring", 00:09:39.371 "raid_level": "raid1", 00:09:39.371 "superblock": true, 00:09:39.371 "num_base_bdevs": 2, 00:09:39.371 "num_base_bdevs_discovered": 0, 00:09:39.371 "num_base_bdevs_operational": 2, 00:09:39.371 "base_bdevs_list": [ 00:09:39.371 { 00:09:39.371 "name": "BaseBdev1", 00:09:39.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.371 "is_configured": false, 00:09:39.371 "data_offset": 0, 00:09:39.371 "data_size": 0 00:09:39.371 }, 00:09:39.371 { 00:09:39.371 "name": "BaseBdev2", 00:09:39.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.371 "is_configured": false, 00:09:39.371 "data_offset": 0, 00:09:39.371 "data_size": 0 00:09:39.371 } 00:09:39.371 ] 00:09:39.371 }' 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.371 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 [2024-10-08 
16:17:32.991748] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.951 [2024-10-08 16:17:32.992046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 16:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 [2024-10-08 16:17:32.999721] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.951 [2024-10-08 16:17:32.999777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.951 [2024-10-08 16:17:32.999793] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.951 [2024-10-08 16:17:32.999812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 [2024-10-08 16:17:33.058831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.951 BaseBdev1 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 [ 00:09:39.951 { 00:09:39.951 "name": "BaseBdev1", 00:09:39.951 "aliases": [ 00:09:39.951 "cffc002a-d56e-4875-8aab-4fdd6a77aa52" 00:09:39.951 ], 00:09:39.951 "product_name": "Malloc disk", 00:09:39.951 "block_size": 512, 00:09:39.951 "num_blocks": 65536, 00:09:39.951 "uuid": "cffc002a-d56e-4875-8aab-4fdd6a77aa52", 00:09:39.951 "assigned_rate_limits": { 00:09:39.951 "rw_ios_per_sec": 0, 00:09:39.951 "rw_mbytes_per_sec": 0, 00:09:39.951 "r_mbytes_per_sec": 0, 00:09:39.951 
"w_mbytes_per_sec": 0 00:09:39.951 }, 00:09:39.951 "claimed": true, 00:09:39.951 "claim_type": "exclusive_write", 00:09:39.951 "zoned": false, 00:09:39.951 "supported_io_types": { 00:09:39.951 "read": true, 00:09:39.951 "write": true, 00:09:39.951 "unmap": true, 00:09:39.951 "flush": true, 00:09:39.951 "reset": true, 00:09:39.951 "nvme_admin": false, 00:09:39.951 "nvme_io": false, 00:09:39.951 "nvme_io_md": false, 00:09:39.951 "write_zeroes": true, 00:09:39.951 "zcopy": true, 00:09:39.951 "get_zone_info": false, 00:09:39.951 "zone_management": false, 00:09:39.951 "zone_append": false, 00:09:39.951 "compare": false, 00:09:39.951 "compare_and_write": false, 00:09:39.951 "abort": true, 00:09:39.951 "seek_hole": false, 00:09:39.951 "seek_data": false, 00:09:39.951 "copy": true, 00:09:39.951 "nvme_iov_md": false 00:09:39.951 }, 00:09:39.951 "memory_domains": [ 00:09:39.951 { 00:09:39.951 "dma_device_id": "system", 00:09:39.951 "dma_device_type": 1 00:09:39.951 }, 00:09:39.951 { 00:09:39.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.951 "dma_device_type": 2 00:09:39.951 } 00:09:39.951 ], 00:09:39.951 "driver_specific": {} 00:09:39.951 } 00:09:39.951 ] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.951 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.952 "name": "Existed_Raid", 00:09:39.952 "uuid": "6b42d747-8023-426a-80ca-7a0da5c54593", 00:09:39.952 "strip_size_kb": 0, 00:09:39.952 "state": "configuring", 00:09:39.952 "raid_level": "raid1", 00:09:39.952 "superblock": true, 00:09:39.952 "num_base_bdevs": 2, 00:09:39.952 "num_base_bdevs_discovered": 1, 00:09:39.952 "num_base_bdevs_operational": 2, 00:09:39.952 "base_bdevs_list": [ 00:09:39.952 { 00:09:39.952 "name": "BaseBdev1", 00:09:39.952 "uuid": "cffc002a-d56e-4875-8aab-4fdd6a77aa52", 00:09:39.952 "is_configured": true, 00:09:39.952 "data_offset": 2048, 00:09:39.952 "data_size": 63488 00:09:39.952 }, 00:09:39.952 { 00:09:39.952 "name": "BaseBdev2", 00:09:39.952 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:39.952 "is_configured": false, 00:09:39.952 "data_offset": 0, 00:09:39.952 "data_size": 0 00:09:39.952 } 00:09:39.952 ] 00:09:39.952 }' 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.952 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.518 [2024-10-08 16:17:33.591066] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.518 [2024-10-08 16:17:33.591442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.518 [2024-10-08 16:17:33.603037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.518 [2024-10-08 16:17:33.605697] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.518 [2024-10-08 16:17:33.605898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.518 "name": "Existed_Raid", 00:09:40.518 "uuid": "86816652-431b-4d17-b9fe-68c348216bcb", 00:09:40.518 "strip_size_kb": 0, 00:09:40.518 "state": "configuring", 00:09:40.518 "raid_level": "raid1", 00:09:40.518 "superblock": true, 00:09:40.518 "num_base_bdevs": 2, 00:09:40.518 "num_base_bdevs_discovered": 1, 00:09:40.518 "num_base_bdevs_operational": 2, 00:09:40.518 "base_bdevs_list": [ 00:09:40.518 { 00:09:40.518 "name": "BaseBdev1", 00:09:40.518 "uuid": "cffc002a-d56e-4875-8aab-4fdd6a77aa52", 00:09:40.518 "is_configured": true, 00:09:40.518 "data_offset": 2048, 00:09:40.518 "data_size": 63488 00:09:40.518 }, 00:09:40.518 { 00:09:40.518 "name": "BaseBdev2", 00:09:40.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.518 "is_configured": false, 00:09:40.518 "data_offset": 0, 00:09:40.518 "data_size": 0 00:09:40.518 } 00:09:40.518 ] 00:09:40.518 }' 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.518 16:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.777 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.777 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.777 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.035 [2024-10-08 16:17:34.129848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.035 [2024-10-08 16:17:34.130167] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.035 [2024-10-08 16:17:34.130187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.035 [2024-10-08 16:17:34.130508] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.035 BaseBdev2 00:09:41.035 [2024-10-08 16:17:34.130715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.035 [2024-10-08 16:17:34.130737] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.035 [2024-10-08 16:17:34.130911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.035 [ 00:09:41.035 { 00:09:41.035 "name": "BaseBdev2", 00:09:41.035 "aliases": [ 00:09:41.035 "7de8d13e-3563-4048-a1f7-2ebc6ef52f7e" 00:09:41.035 ], 00:09:41.035 "product_name": "Malloc disk", 00:09:41.035 "block_size": 512, 00:09:41.035 "num_blocks": 65536, 00:09:41.035 "uuid": "7de8d13e-3563-4048-a1f7-2ebc6ef52f7e", 00:09:41.035 "assigned_rate_limits": { 00:09:41.035 "rw_ios_per_sec": 0, 00:09:41.035 "rw_mbytes_per_sec": 0, 00:09:41.035 "r_mbytes_per_sec": 0, 00:09:41.035 "w_mbytes_per_sec": 0 00:09:41.035 }, 00:09:41.035 "claimed": true, 00:09:41.035 "claim_type": "exclusive_write", 00:09:41.035 "zoned": false, 00:09:41.035 "supported_io_types": { 00:09:41.035 "read": true, 00:09:41.035 "write": true, 00:09:41.035 "unmap": true, 00:09:41.035 "flush": true, 00:09:41.035 "reset": true, 00:09:41.035 "nvme_admin": false, 00:09:41.035 "nvme_io": false, 00:09:41.035 "nvme_io_md": false, 00:09:41.035 "write_zeroes": true, 00:09:41.035 "zcopy": true, 00:09:41.035 "get_zone_info": false, 00:09:41.035 "zone_management": false, 00:09:41.035 "zone_append": false, 00:09:41.035 "compare": false, 00:09:41.035 "compare_and_write": false, 00:09:41.035 "abort": true, 00:09:41.035 "seek_hole": false, 00:09:41.035 "seek_data": false, 00:09:41.035 "copy": true, 00:09:41.035 "nvme_iov_md": false 00:09:41.035 }, 00:09:41.035 "memory_domains": [ 00:09:41.035 { 00:09:41.035 "dma_device_id": "system", 00:09:41.035 "dma_device_type": 1 00:09:41.035 }, 00:09:41.035 { 00:09:41.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.035 "dma_device_type": 2 00:09:41.035 } 00:09:41.035 ], 00:09:41.035 "driver_specific": {} 00:09:41.035 } 00:09:41.035 ] 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.035 16:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.035 "name": "Existed_Raid", 00:09:41.035 "uuid": "86816652-431b-4d17-b9fe-68c348216bcb", 00:09:41.035 "strip_size_kb": 0, 00:09:41.035 "state": "online", 00:09:41.035 "raid_level": "raid1", 00:09:41.035 "superblock": true, 00:09:41.035 "num_base_bdevs": 2, 00:09:41.035 "num_base_bdevs_discovered": 2, 00:09:41.035 "num_base_bdevs_operational": 2, 00:09:41.035 "base_bdevs_list": [ 00:09:41.035 { 00:09:41.035 "name": "BaseBdev1", 00:09:41.035 "uuid": "cffc002a-d56e-4875-8aab-4fdd6a77aa52", 00:09:41.035 "is_configured": true, 00:09:41.035 "data_offset": 2048, 00:09:41.035 "data_size": 63488 00:09:41.035 }, 00:09:41.035 { 00:09:41.035 "name": "BaseBdev2", 00:09:41.035 "uuid": "7de8d13e-3563-4048-a1f7-2ebc6ef52f7e", 00:09:41.035 "is_configured": true, 00:09:41.035 "data_offset": 2048, 00:09:41.035 "data_size": 63488 00:09:41.035 } 00:09:41.035 ] 00:09:41.035 }' 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.035 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.601 [2024-10-08 16:17:34.650442] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.601 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.601 "name": "Existed_Raid", 00:09:41.601 "aliases": [ 00:09:41.601 "86816652-431b-4d17-b9fe-68c348216bcb" 00:09:41.601 ], 00:09:41.601 "product_name": "Raid Volume", 00:09:41.601 "block_size": 512, 00:09:41.601 "num_blocks": 63488, 00:09:41.601 "uuid": "86816652-431b-4d17-b9fe-68c348216bcb", 00:09:41.601 "assigned_rate_limits": { 00:09:41.601 "rw_ios_per_sec": 0, 00:09:41.601 "rw_mbytes_per_sec": 0, 00:09:41.601 "r_mbytes_per_sec": 0, 00:09:41.601 "w_mbytes_per_sec": 0 00:09:41.601 }, 00:09:41.601 "claimed": false, 00:09:41.601 "zoned": false, 00:09:41.601 "supported_io_types": { 00:09:41.601 "read": true, 00:09:41.601 "write": true, 00:09:41.601 "unmap": false, 00:09:41.601 "flush": false, 00:09:41.601 "reset": true, 00:09:41.601 "nvme_admin": false, 00:09:41.601 "nvme_io": false, 00:09:41.601 "nvme_io_md": false, 00:09:41.601 "write_zeroes": true, 00:09:41.601 "zcopy": false, 00:09:41.601 "get_zone_info": false, 00:09:41.601 "zone_management": false, 00:09:41.601 "zone_append": false, 00:09:41.601 "compare": false, 00:09:41.601 "compare_and_write": false, 00:09:41.601 "abort": false, 00:09:41.601 "seek_hole": false, 00:09:41.601 "seek_data": false, 00:09:41.601 "copy": false, 00:09:41.601 "nvme_iov_md": false 00:09:41.601 }, 00:09:41.601 "memory_domains": [ 00:09:41.601 { 00:09:41.601 
"dma_device_id": "system", 00:09:41.601 "dma_device_type": 1 00:09:41.601 }, 00:09:41.601 { 00:09:41.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.601 "dma_device_type": 2 00:09:41.601 }, 00:09:41.601 { 00:09:41.601 "dma_device_id": "system", 00:09:41.601 "dma_device_type": 1 00:09:41.601 }, 00:09:41.601 { 00:09:41.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.601 "dma_device_type": 2 00:09:41.601 } 00:09:41.601 ], 00:09:41.601 "driver_specific": { 00:09:41.601 "raid": { 00:09:41.601 "uuid": "86816652-431b-4d17-b9fe-68c348216bcb", 00:09:41.601 "strip_size_kb": 0, 00:09:41.601 "state": "online", 00:09:41.601 "raid_level": "raid1", 00:09:41.601 "superblock": true, 00:09:41.601 "num_base_bdevs": 2, 00:09:41.601 "num_base_bdevs_discovered": 2, 00:09:41.601 "num_base_bdevs_operational": 2, 00:09:41.601 "base_bdevs_list": [ 00:09:41.601 { 00:09:41.601 "name": "BaseBdev1", 00:09:41.602 "uuid": "cffc002a-d56e-4875-8aab-4fdd6a77aa52", 00:09:41.602 "is_configured": true, 00:09:41.602 "data_offset": 2048, 00:09:41.602 "data_size": 63488 00:09:41.602 }, 00:09:41.602 { 00:09:41.602 "name": "BaseBdev2", 00:09:41.602 "uuid": "7de8d13e-3563-4048-a1f7-2ebc6ef52f7e", 00:09:41.602 "is_configured": true, 00:09:41.602 "data_offset": 2048, 00:09:41.602 "data_size": 63488 00:09:41.602 } 00:09:41.602 ] 00:09:41.602 } 00:09:41.602 } 00:09:41.602 }' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.602 BaseBdev2' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.602 16:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.602 16:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.602 [2024-10-08 16:17:34.906250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.860 16:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.860 "name": "Existed_Raid", 00:09:41.860 "uuid": "86816652-431b-4d17-b9fe-68c348216bcb", 00:09:41.860 "strip_size_kb": 0, 00:09:41.860 "state": "online", 00:09:41.860 "raid_level": "raid1", 00:09:41.860 "superblock": true, 00:09:41.860 "num_base_bdevs": 2, 00:09:41.860 "num_base_bdevs_discovered": 1, 00:09:41.860 "num_base_bdevs_operational": 1, 00:09:41.860 "base_bdevs_list": [ 00:09:41.860 { 00:09:41.860 "name": null, 00:09:41.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.860 "is_configured": false, 00:09:41.860 "data_offset": 0, 00:09:41.860 "data_size": 63488 00:09:41.860 }, 00:09:41.860 { 00:09:41.860 "name": "BaseBdev2", 00:09:41.860 "uuid": "7de8d13e-3563-4048-a1f7-2ebc6ef52f7e", 00:09:41.860 "is_configured": true, 00:09:41.860 "data_offset": 2048, 00:09:41.860 "data_size": 63488 00:09:41.860 } 00:09:41.860 ] 00:09:41.860 }' 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.860 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 16:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 [2024-10-08 16:17:35.581848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.427 [2024-10-08 16:17:35.582007] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.427 [2024-10-08 16:17:35.675600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.427 [2024-10-08 16:17:35.675831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.427 [2024-10-08 16:17:35.675982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, 
state offline 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.427 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63212 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63212 ']' 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63212 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.428 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63212 00:09:42.686 killing process with pid 63212 00:09:42.686 16:17:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.686 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.686 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63212' 00:09:42.686 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63212 00:09:42.686 [2024-10-08 16:17:35.763101] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.686 16:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63212 00:09:42.686 [2024-10-08 16:17:35.778483] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.061 16:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.061 00:09:44.061 real 0m5.689s 00:09:44.061 user 0m8.303s 00:09:44.061 sys 0m0.833s 00:09:44.061 16:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.061 16:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.061 ************************************ 00:09:44.061 END TEST raid_state_function_test_sb 00:09:44.061 ************************************ 00:09:44.061 16:17:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:44.061 16:17:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:44.061 16:17:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.061 16:17:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.061 ************************************ 00:09:44.061 START TEST raid_superblock_test 00:09:44.061 ************************************ 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:09:44.061 
16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:44.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63464 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63464 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63464 ']' 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.061 16:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.061 [2024-10-08 16:17:37.278831] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:44.061 [2024-10-08 16:17:37.279348] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:09:44.320 [2024-10-08 16:17:37.466990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.595 [2024-10-08 16:17:37.708388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.856 [2024-10-08 16:17:37.911148] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.856 [2024-10-08 16:17:37.911449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:45.117 
16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.117 malloc1 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.117 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.118 [2024-10-08 16:17:38.283020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.118 [2024-10-08 16:17:38.283366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.118 [2024-10-08 16:17:38.283458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:45.118 [2024-10-08 16:17:38.283625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.118 [2024-10-08 16:17:38.286563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.118 [2024-10-08 16:17:38.286739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.118 pt1 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.118 malloc2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.118 [2024-10-08 16:17:38.359091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.118 [2024-10-08 16:17:38.359191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.118 [2024-10-08 16:17:38.359229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:45.118 [2024-10-08 16:17:38.359248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.118 [2024-10-08 16:17:38.362119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.118 [2024-10-08 16:17:38.362385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.118 
pt2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.118 [2024-10-08 16:17:38.367304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.118 [2024-10-08 16:17:38.369826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.118 [2024-10-08 16:17:38.370075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:45.118 [2024-10-08 16:17:38.370097] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.118 [2024-10-08 16:17:38.370426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.118 [2024-10-08 16:17:38.370688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:45.118 [2024-10-08 16:17:38.370716] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:45.118 [2024-10-08 16:17:38.370918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.118 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.119 "name": "raid_bdev1", 00:09:45.119 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:45.119 "strip_size_kb": 0, 00:09:45.119 "state": "online", 00:09:45.119 "raid_level": "raid1", 00:09:45.119 "superblock": true, 00:09:45.119 "num_base_bdevs": 2, 00:09:45.119 "num_base_bdevs_discovered": 2, 00:09:45.119 "num_base_bdevs_operational": 2, 00:09:45.119 "base_bdevs_list": [ 00:09:45.119 { 00:09:45.119 "name": "pt1", 00:09:45.119 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:45.119 "is_configured": true, 00:09:45.119 "data_offset": 2048, 00:09:45.119 "data_size": 63488 00:09:45.119 }, 00:09:45.119 { 00:09:45.119 "name": "pt2", 00:09:45.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.119 "is_configured": true, 00:09:45.119 "data_offset": 2048, 00:09:45.119 "data_size": 63488 00:09:45.119 } 00:09:45.119 ] 00:09:45.119 }' 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.119 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.686 [2024-10-08 16:17:38.907779] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.686 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:45.686 "name": "raid_bdev1", 00:09:45.686 "aliases": [ 00:09:45.686 "e1e15e90-7223-4844-b92a-41f7c91e3d32" 00:09:45.686 ], 00:09:45.686 "product_name": "Raid Volume", 00:09:45.686 "block_size": 512, 00:09:45.686 "num_blocks": 63488, 00:09:45.686 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:45.686 "assigned_rate_limits": { 00:09:45.686 "rw_ios_per_sec": 0, 00:09:45.686 "rw_mbytes_per_sec": 0, 00:09:45.686 "r_mbytes_per_sec": 0, 00:09:45.687 "w_mbytes_per_sec": 0 00:09:45.687 }, 00:09:45.687 "claimed": false, 00:09:45.687 "zoned": false, 00:09:45.687 "supported_io_types": { 00:09:45.687 "read": true, 00:09:45.687 "write": true, 00:09:45.687 "unmap": false, 00:09:45.687 "flush": false, 00:09:45.687 "reset": true, 00:09:45.687 "nvme_admin": false, 00:09:45.687 "nvme_io": false, 00:09:45.687 "nvme_io_md": false, 00:09:45.687 "write_zeroes": true, 00:09:45.687 "zcopy": false, 00:09:45.687 "get_zone_info": false, 00:09:45.687 "zone_management": false, 00:09:45.687 "zone_append": false, 00:09:45.687 "compare": false, 00:09:45.687 "compare_and_write": false, 00:09:45.687 "abort": false, 00:09:45.687 "seek_hole": false, 00:09:45.687 "seek_data": false, 00:09:45.687 "copy": false, 00:09:45.687 "nvme_iov_md": false 00:09:45.687 }, 00:09:45.687 "memory_domains": [ 00:09:45.687 { 00:09:45.687 "dma_device_id": "system", 00:09:45.687 "dma_device_type": 1 00:09:45.687 }, 00:09:45.687 { 00:09:45.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.687 "dma_device_type": 2 00:09:45.687 }, 00:09:45.687 { 00:09:45.687 "dma_device_id": "system", 00:09:45.687 "dma_device_type": 1 00:09:45.687 }, 00:09:45.687 { 00:09:45.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.687 "dma_device_type": 2 00:09:45.687 } 00:09:45.687 ], 00:09:45.687 "driver_specific": { 00:09:45.687 "raid": { 00:09:45.687 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:45.687 "strip_size_kb": 0, 00:09:45.687 "state": "online", 00:09:45.687 "raid_level": "raid1", 
00:09:45.687 "superblock": true, 00:09:45.687 "num_base_bdevs": 2, 00:09:45.687 "num_base_bdevs_discovered": 2, 00:09:45.687 "num_base_bdevs_operational": 2, 00:09:45.687 "base_bdevs_list": [ 00:09:45.687 { 00:09:45.687 "name": "pt1", 00:09:45.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.687 "is_configured": true, 00:09:45.687 "data_offset": 2048, 00:09:45.687 "data_size": 63488 00:09:45.687 }, 00:09:45.687 { 00:09:45.687 "name": "pt2", 00:09:45.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.687 "is_configured": true, 00:09:45.687 "data_offset": 2048, 00:09:45.687 "data_size": 63488 00:09:45.687 } 00:09:45.687 ] 00:09:45.687 } 00:09:45.687 } 00:09:45.687 }' 00:09:45.687 16:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.687 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.687 pt2' 00:09:45.687 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.945 16:17:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.946 [2024-10-08 16:17:39.175759] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e1e15e90-7223-4844-b92a-41f7c91e3d32 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e1e15e90-7223-4844-b92a-41f7c91e3d32 ']' 00:09:45.946 16:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.946 [2024-10-08 16:17:39.239566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.946 [2024-10-08 16:17:39.239639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.946 [2024-10-08 16:17:39.239761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.946 [2024-10-08 16:17:39.239851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.946 [2024-10-08 16:17:39.239876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.946 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.204 16:17:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.204 [2024-10-08 16:17:39.375564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:46.204 [2024-10-08 16:17:39.378085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:46.204 [2024-10-08 16:17:39.378399] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:46.204 [2024-10-08 16:17:39.378496] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:46.204 [2024-10-08 16:17:39.378553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.204 [2024-10-08 16:17:39.378577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:46.204 request: 00:09:46.204 { 00:09:46.204 "name": "raid_bdev1", 00:09:46.204 "raid_level": "raid1", 00:09:46.204 "base_bdevs": [ 00:09:46.204 "malloc1", 00:09:46.204 "malloc2" 00:09:46.204 ], 00:09:46.204 "superblock": false, 00:09:46.204 "method": "bdev_raid_create", 00:09:46.204 "req_id": 1 00:09:46.204 } 00:09:46.204 Got 
JSON-RPC error response 00:09:46.204 response: 00:09:46.204 { 00:09:46.204 "code": -17, 00:09:46.204 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:46.204 } 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:46.204 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.205 [2024-10-08 16:17:39.443564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.205 [2024-10-08 16:17:39.443868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:46.205 [2024-10-08 16:17:39.443948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:46.205 [2024-10-08 16:17:39.444080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.205 [2024-10-08 16:17:39.447139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.205 [2024-10-08 16:17:39.447327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.205 [2024-10-08 16:17:39.447585] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:46.205 [2024-10-08 16:17:39.447798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.205 pt1 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.205 
16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.205 "name": "raid_bdev1", 00:09:46.205 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:46.205 "strip_size_kb": 0, 00:09:46.205 "state": "configuring", 00:09:46.205 "raid_level": "raid1", 00:09:46.205 "superblock": true, 00:09:46.205 "num_base_bdevs": 2, 00:09:46.205 "num_base_bdevs_discovered": 1, 00:09:46.205 "num_base_bdevs_operational": 2, 00:09:46.205 "base_bdevs_list": [ 00:09:46.205 { 00:09:46.205 "name": "pt1", 00:09:46.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.205 "is_configured": true, 00:09:46.205 "data_offset": 2048, 00:09:46.205 "data_size": 63488 00:09:46.205 }, 00:09:46.205 { 00:09:46.205 "name": null, 00:09:46.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.205 "is_configured": false, 00:09:46.205 "data_offset": 2048, 00:09:46.205 "data_size": 63488 00:09:46.205 } 00:09:46.205 ] 00:09:46.205 }' 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.205 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.775 [2024-10-08 16:17:39.971879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.775 [2024-10-08 16:17:39.972031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.775 [2024-10-08 16:17:39.972070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:46.775 [2024-10-08 16:17:39.972093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.775 [2024-10-08 16:17:39.972789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.775 [2024-10-08 16:17:39.972833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.775 [2024-10-08 16:17:39.972999] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.775 [2024-10-08 16:17:39.973045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.775 [2024-10-08 16:17:39.973238] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.775 [2024-10-08 16:17:39.973263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.775 [2024-10-08 16:17:39.973586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.775 [2024-10-08 16:17:39.973814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.775 [2024-10-08 16:17:39.973833] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:46.775 [2024-10-08 16:17:39.974017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.775 pt2 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:46.775 16:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.775 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.775 "name": "raid_bdev1", 00:09:46.775 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:46.776 "strip_size_kb": 0, 00:09:46.776 "state": "online", 00:09:46.776 "raid_level": "raid1", 00:09:46.776 "superblock": true, 00:09:46.776 "num_base_bdevs": 2, 00:09:46.776 "num_base_bdevs_discovered": 2, 00:09:46.776 "num_base_bdevs_operational": 2, 00:09:46.776 "base_bdevs_list": [ 00:09:46.776 { 00:09:46.776 "name": "pt1", 00:09:46.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.776 "is_configured": true, 00:09:46.776 "data_offset": 2048, 00:09:46.776 "data_size": 63488 00:09:46.776 }, 00:09:46.776 { 00:09:46.776 "name": "pt2", 00:09:46.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.776 "is_configured": true, 00:09:46.776 "data_offset": 2048, 00:09:46.776 "data_size": 63488 00:09:46.776 } 00:09:46.776 ] 00:09:46.776 }' 00:09:46.776 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.776 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.345 [2024-10-08 16:17:40.508317] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.345 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.345 "name": "raid_bdev1", 00:09:47.345 "aliases": [ 00:09:47.345 "e1e15e90-7223-4844-b92a-41f7c91e3d32" 00:09:47.345 ], 00:09:47.345 "product_name": "Raid Volume", 00:09:47.345 "block_size": 512, 00:09:47.345 "num_blocks": 63488, 00:09:47.345 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:47.345 "assigned_rate_limits": { 00:09:47.345 "rw_ios_per_sec": 0, 00:09:47.345 "rw_mbytes_per_sec": 0, 00:09:47.345 "r_mbytes_per_sec": 0, 00:09:47.345 "w_mbytes_per_sec": 0 00:09:47.345 }, 00:09:47.345 "claimed": false, 00:09:47.345 "zoned": false, 00:09:47.345 "supported_io_types": { 00:09:47.345 "read": true, 00:09:47.345 "write": true, 00:09:47.345 "unmap": false, 00:09:47.345 "flush": false, 00:09:47.345 "reset": true, 00:09:47.345 "nvme_admin": false, 00:09:47.345 "nvme_io": false, 00:09:47.345 "nvme_io_md": false, 00:09:47.345 "write_zeroes": true, 00:09:47.345 "zcopy": false, 00:09:47.345 "get_zone_info": false, 00:09:47.345 "zone_management": false, 00:09:47.345 "zone_append": false, 00:09:47.345 "compare": false, 00:09:47.345 "compare_and_write": false, 00:09:47.345 "abort": false, 00:09:47.345 "seek_hole": false, 00:09:47.345 "seek_data": false, 00:09:47.345 "copy": false, 00:09:47.345 "nvme_iov_md": false 00:09:47.345 }, 00:09:47.345 "memory_domains": [ 00:09:47.345 { 00:09:47.345 "dma_device_id": 
"system", 00:09:47.345 "dma_device_type": 1 00:09:47.345 }, 00:09:47.345 { 00:09:47.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.345 "dma_device_type": 2 00:09:47.345 }, 00:09:47.345 { 00:09:47.345 "dma_device_id": "system", 00:09:47.345 "dma_device_type": 1 00:09:47.345 }, 00:09:47.345 { 00:09:47.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.345 "dma_device_type": 2 00:09:47.345 } 00:09:47.345 ], 00:09:47.345 "driver_specific": { 00:09:47.345 "raid": { 00:09:47.345 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:47.345 "strip_size_kb": 0, 00:09:47.345 "state": "online", 00:09:47.345 "raid_level": "raid1", 00:09:47.345 "superblock": true, 00:09:47.345 "num_base_bdevs": 2, 00:09:47.345 "num_base_bdevs_discovered": 2, 00:09:47.345 "num_base_bdevs_operational": 2, 00:09:47.345 "base_bdevs_list": [ 00:09:47.345 { 00:09:47.345 "name": "pt1", 00:09:47.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.345 "is_configured": true, 00:09:47.345 "data_offset": 2048, 00:09:47.345 "data_size": 63488 00:09:47.345 }, 00:09:47.345 { 00:09:47.345 "name": "pt2", 00:09:47.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.345 "is_configured": true, 00:09:47.345 "data_offset": 2048, 00:09:47.345 "data_size": 63488 00:09:47.345 } 00:09:47.345 ] 00:09:47.345 } 00:09:47.345 } 00:09:47.345 }' 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.346 pt2' 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.346 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.604 [2024-10-08 16:17:40.776422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e1e15e90-7223-4844-b92a-41f7c91e3d32 '!=' e1e15e90-7223-4844-b92a-41f7c91e3d32 ']' 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.604 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.604 [2024-10-08 16:17:40.824145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.605 "name": "raid_bdev1", 00:09:47.605 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:47.605 "strip_size_kb": 0, 00:09:47.605 "state": "online", 00:09:47.605 "raid_level": "raid1", 00:09:47.605 "superblock": true, 00:09:47.605 "num_base_bdevs": 2, 00:09:47.605 "num_base_bdevs_discovered": 1, 00:09:47.605 "num_base_bdevs_operational": 1, 00:09:47.605 "base_bdevs_list": [ 00:09:47.605 { 00:09:47.605 "name": null, 00:09:47.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.605 "is_configured": false, 00:09:47.605 "data_offset": 0, 00:09:47.605 "data_size": 63488 00:09:47.605 }, 00:09:47.605 { 00:09:47.605 "name": "pt2", 00:09:47.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.605 "is_configured": true, 00:09:47.605 "data_offset": 2048, 00:09:47.605 "data_size": 63488 00:09:47.605 } 00:09:47.605 ] 00:09:47.605 }' 
00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.605 16:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 [2024-10-08 16:17:41.320421] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.171 [2024-10-08 16:17:41.320485] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.171 [2024-10-08 16:17:41.320616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.171 [2024-10-08 16:17:41.320690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.171 [2024-10-08 16:17:41.320714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 [2024-10-08 16:17:41.396394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.171 [2024-10-08 16:17:41.396512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.171 [2024-10-08 16:17:41.396556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.171 [2024-10-08 16:17:41.396579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.172 
[2024-10-08 16:17:41.399789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.172 [2024-10-08 16:17:41.399846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.172 [2024-10-08 16:17:41.399956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.172 [2024-10-08 16:17:41.400028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.172 [2024-10-08 16:17:41.400192] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.172 [2024-10-08 16:17:41.400219] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.172 [2024-10-08 16:17:41.400532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:48.172 [2024-10-08 16:17:41.400757] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.172 [2024-10-08 16:17:41.400775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.172 [2024-10-08 16:17:41.401060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.172 pt2 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.172 "name": "raid_bdev1", 00:09:48.172 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:48.172 "strip_size_kb": 0, 00:09:48.172 "state": "online", 00:09:48.172 "raid_level": "raid1", 00:09:48.172 "superblock": true, 00:09:48.172 "num_base_bdevs": 2, 00:09:48.172 "num_base_bdevs_discovered": 1, 00:09:48.172 "num_base_bdevs_operational": 1, 00:09:48.172 "base_bdevs_list": [ 00:09:48.172 { 00:09:48.172 "name": null, 00:09:48.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.172 "is_configured": false, 00:09:48.172 "data_offset": 2048, 00:09:48.172 "data_size": 63488 00:09:48.172 }, 00:09:48.172 { 00:09:48.172 "name": "pt2", 00:09:48.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.172 "is_configured": true, 00:09:48.172 "data_offset": 2048, 00:09:48.172 "data_size": 63488 00:09:48.172 } 00:09:48.172 ] 00:09:48.172 }' 
00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.172 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.739 [2024-10-08 16:17:41.961163] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.739 [2024-10-08 16:17:41.961541] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.739 [2024-10-08 16:17:41.961686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.739 [2024-10-08 16:17:41.961774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.739 [2024-10-08 16:17:41.961794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:48.739 16:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.739 [2024-10-08 16:17:42.021217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.739 [2024-10-08 16:17:42.021340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.739 [2024-10-08 16:17:42.021378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:48.739 [2024-10-08 16:17:42.021398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.739 [2024-10-08 16:17:42.024313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.739 [2024-10-08 16:17:42.024363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.739 [2024-10-08 16:17:42.024496] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.739 [2024-10-08 16:17:42.024580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.739 [2024-10-08 16:17:42.024756] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:48.739 [2024-10-08 16:17:42.024776] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.739 [2024-10-08 16:17:42.024807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:48.739 [2024-10-08 16:17:42.024922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:48.739 [2024-10-08 16:17:42.025047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:48.739 [2024-10-08 16:17:42.025065] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.739 [2024-10-08 16:17:42.025377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:48.739 [2024-10-08 16:17:42.025598] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:48.739 [2024-10-08 16:17:42.025622] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:48.739 [2024-10-08 16:17:42.025873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.739 pt1 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.739 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.740 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.998 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.998 "name": "raid_bdev1", 00:09:48.998 "uuid": "e1e15e90-7223-4844-b92a-41f7c91e3d32", 00:09:48.998 "strip_size_kb": 0, 00:09:48.998 "state": "online", 00:09:48.998 "raid_level": "raid1", 00:09:48.998 "superblock": true, 00:09:48.998 "num_base_bdevs": 2, 00:09:48.998 "num_base_bdevs_discovered": 1, 00:09:48.998 "num_base_bdevs_operational": 1, 00:09:48.998 "base_bdevs_list": [ 00:09:48.998 { 00:09:48.998 "name": null, 00:09:48.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.998 "is_configured": false, 00:09:48.998 "data_offset": 2048, 00:09:48.998 "data_size": 63488 00:09:48.998 }, 00:09:48.998 { 00:09:48.998 "name": "pt2", 00:09:48.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.998 "is_configured": true, 00:09:48.998 "data_offset": 2048, 00:09:48.998 "data_size": 63488 00:09:48.998 } 00:09:48.998 ] 00:09:48.998 }' 00:09:48.998 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.998 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.575 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.576 [2024-10-08 16:17:42.669629] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e1e15e90-7223-4844-b92a-41f7c91e3d32 '!=' e1e15e90-7223-4844-b92a-41f7c91e3d32 ']' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63464 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63464 ']' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63464 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63464 00:09:49.576 killing process with pid 
63464 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63464' 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63464 00:09:49.576 16:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63464 00:09:49.576 [2024-10-08 16:17:42.745170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.576 [2024-10-08 16:17:42.745360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.576 [2024-10-08 16:17:42.745466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.576 [2024-10-08 16:17:42.745506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:49.851 [2024-10-08 16:17:42.924401] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.224 16:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.224 00:09:51.224 real 0m6.961s 00:09:51.224 user 0m10.890s 00:09:51.224 sys 0m0.996s 00:09:51.224 16:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.224 16:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.224 ************************************ 00:09:51.224 END TEST raid_superblock_test 00:09:51.224 ************************************ 00:09:51.224 16:17:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:51.225 16:17:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.225 16:17:44 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.225 16:17:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.225 ************************************ 00:09:51.225 START TEST raid_read_error_test 00:09:51.225 ************************************ 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.225 16:17:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PnGKYsLbRn 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63800 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63800 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63800 ']' 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.225 16:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.225 [2024-10-08 16:17:44.287138] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:09:51.225 [2024-10-08 16:17:44.287325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63800 ] 00:09:51.225 [2024-10-08 16:17:44.454440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.483 [2024-10-08 16:17:44.707825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.741 [2024-10-08 16:17:44.913007] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.741 [2024-10-08 16:17:44.913136] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 BaseBdev1_malloc 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 true 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 [2024-10-08 16:17:45.404794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.309 [2024-10-08 16:17:45.404913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.309 [2024-10-08 16:17:45.404947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.309 [2024-10-08 16:17:45.404971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.309 [2024-10-08 16:17:45.407899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.309 [2024-10-08 16:17:45.407956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.309 BaseBdev1 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.309 BaseBdev2_malloc 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 true 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 [2024-10-08 16:17:45.478704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.309 [2024-10-08 16:17:45.478821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.309 [2024-10-08 16:17:45.478870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.309 [2024-10-08 16:17:45.478894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.309 [2024-10-08 16:17:45.481818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.309 [2024-10-08 16:17:45.481886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.309 BaseBdev2 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:52.309 16:17:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 [2024-10-08 16:17:45.486868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.309 [2024-10-08 16:17:45.489376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.309 [2024-10-08 16:17:45.489702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.309 [2024-10-08 16:17:45.489740] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.309 [2024-10-08 16:17:45.490071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:52.309 [2024-10-08 16:17:45.490323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.309 [2024-10-08 16:17:45.490343] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:52.309 [2024-10-08 16:17:45.490587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.309 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.309 "name": "raid_bdev1", 00:09:52.309 "uuid": "fd6cdc91-6e3d-4dcc-aa59-a28f1b49b06d", 00:09:52.310 "strip_size_kb": 0, 00:09:52.310 "state": "online", 00:09:52.310 "raid_level": "raid1", 00:09:52.310 "superblock": true, 00:09:52.310 "num_base_bdevs": 2, 00:09:52.310 "num_base_bdevs_discovered": 2, 00:09:52.310 "num_base_bdevs_operational": 2, 00:09:52.310 "base_bdevs_list": [ 00:09:52.310 { 00:09:52.310 "name": "BaseBdev1", 00:09:52.310 "uuid": "0f845b9b-bc41-5624-adff-93796cca5121", 00:09:52.310 "is_configured": true, 00:09:52.310 "data_offset": 2048, 00:09:52.310 "data_size": 63488 00:09:52.310 }, 00:09:52.310 { 00:09:52.310 "name": "BaseBdev2", 00:09:52.310 "uuid": "a29432b8-eb5c-5ddc-995c-f4f112f2397a", 00:09:52.310 "is_configured": true, 00:09:52.310 "data_offset": 2048, 00:09:52.310 "data_size": 63488 00:09:52.310 } 00:09:52.310 ] 00:09:52.310 }' 00:09:52.310 16:17:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.310 16:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.876 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.876 16:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.876 [2024-10-08 16:17:46.100535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:53.810 16:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.810 16:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.810 16:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.810 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.810 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.810 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.811 16:17:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.811 "name": "raid_bdev1", 00:09:53.811 "uuid": "fd6cdc91-6e3d-4dcc-aa59-a28f1b49b06d", 00:09:53.811 "strip_size_kb": 0, 00:09:53.811 "state": "online", 00:09:53.811 "raid_level": "raid1", 00:09:53.811 "superblock": true, 00:09:53.811 "num_base_bdevs": 2, 00:09:53.811 "num_base_bdevs_discovered": 2, 00:09:53.811 "num_base_bdevs_operational": 2, 00:09:53.811 "base_bdevs_list": [ 00:09:53.811 { 00:09:53.811 "name": "BaseBdev1", 00:09:53.811 "uuid": "0f845b9b-bc41-5624-adff-93796cca5121", 00:09:53.811 "is_configured": true, 00:09:53.811 "data_offset": 2048, 00:09:53.811 "data_size": 63488 00:09:53.811 }, 00:09:53.811 { 00:09:53.811 "name": "BaseBdev2", 00:09:53.811 "uuid": "a29432b8-eb5c-5ddc-995c-f4f112f2397a", 00:09:53.811 "is_configured": true, 00:09:53.811 "data_offset": 2048, 00:09:53.811 "data_size": 63488 
00:09:53.811 } 00:09:53.811 ] 00:09:53.811 }' 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.811 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.377 [2024-10-08 16:17:47.552965] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.377 [2024-10-08 16:17:47.553270] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.377 [2024-10-08 16:17:47.556916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.377 [2024-10-08 16:17:47.557179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.377 { 00:09:54.377 "results": [ 00:09:54.377 { 00:09:54.377 "job": "raid_bdev1", 00:09:54.377 "core_mask": "0x1", 00:09:54.377 "workload": "randrw", 00:09:54.377 "percentage": 50, 00:09:54.377 "status": "finished", 00:09:54.377 "queue_depth": 1, 00:09:54.377 "io_size": 131072, 00:09:54.377 "runtime": 1.450183, 00:09:54.377 "iops": 11432.350262001417, 00:09:54.377 "mibps": 1429.0437827501771, 00:09:54.377 "io_failed": 0, 00:09:54.377 "io_timeout": 0, 00:09:54.377 "avg_latency_us": 82.76996901885737, 00:09:54.377 "min_latency_us": 42.35636363636364, 00:09:54.377 "max_latency_us": 1787.3454545454545 00:09:54.377 } 00:09:54.377 ], 00:09:54.377 "core_count": 1 00:09:54.377 } 00:09:54.377 [2024-10-08 16:17:47.557440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.377 [2024-10-08 16:17:47.557480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007e80 name raid_bdev1, state offline 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63800 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63800 ']' 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63800 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63800 00:09:54.377 killing process with pid 63800 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63800' 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63800 00:09:54.377 [2024-10-08 16:17:47.594942] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.377 16:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63800 00:09:54.636 [2024-10-08 16:17:47.716926] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PnGKYsLbRn 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:56.015 00:09:56.015 real 0m4.817s 00:09:56.015 user 0m5.968s 00:09:56.015 sys 0m0.593s 00:09:56.015 ************************************ 00:09:56.015 END TEST raid_read_error_test 00:09:56.015 ************************************ 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.015 16:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.015 16:17:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:56.015 16:17:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.015 16:17:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.015 16:17:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.015 ************************************ 00:09:56.015 START TEST raid_write_error_test 00:09:56.015 ************************************ 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6SWLyI0fGU 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63951 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63951 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63951 ']' 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.015 16:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.015 [2024-10-08 16:17:49.184603] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:56.015 [2024-10-08 16:17:49.184810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63951 ] 00:09:56.273 [2024-10-08 16:17:49.357332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.531 [2024-10-08 16:17:49.597337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.531 [2024-10-08 16:17:49.801493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.531 [2024-10-08 16:17:49.801579] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.097 BaseBdev1_malloc 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.097 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 true 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 [2024-10-08 16:17:50.210142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.098 [2024-10-08 16:17:50.210216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.098 [2024-10-08 16:17:50.210245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.098 [2024-10-08 16:17:50.210265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.098 [2024-10-08 16:17:50.212983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.098 [2024-10-08 16:17:50.213050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.098 BaseBdev1 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 BaseBdev2_malloc 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.098 16:17:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 true 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 [2024-10-08 16:17:50.281492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.098 [2024-10-08 16:17:50.281592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.098 [2024-10-08 16:17:50.281626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.098 [2024-10-08 16:17:50.281649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.098 [2024-10-08 16:17:50.284777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.098 [2024-10-08 16:17:50.284842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.098 BaseBdev2 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 [2024-10-08 16:17:50.289608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:57.098 [2024-10-08 16:17:50.292186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.098 [2024-10-08 16:17:50.292452] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.098 [2024-10-08 16:17:50.292479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.098 [2024-10-08 16:17:50.292868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:57.098 [2024-10-08 16:17:50.293126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.098 [2024-10-08 16:17:50.293146] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:57.098 [2024-10-08 16:17:50.293361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.098 "name": "raid_bdev1", 00:09:57.098 "uuid": "c420dc3d-6072-4705-ba85-a09b3f0e531f", 00:09:57.098 "strip_size_kb": 0, 00:09:57.098 "state": "online", 00:09:57.098 "raid_level": "raid1", 00:09:57.098 "superblock": true, 00:09:57.098 "num_base_bdevs": 2, 00:09:57.098 "num_base_bdevs_discovered": 2, 00:09:57.098 "num_base_bdevs_operational": 2, 00:09:57.098 "base_bdevs_list": [ 00:09:57.098 { 00:09:57.098 "name": "BaseBdev1", 00:09:57.098 "uuid": "f1eb07e2-dab3-52e4-895b-a1244683d9d0", 00:09:57.098 "is_configured": true, 00:09:57.098 "data_offset": 2048, 00:09:57.098 "data_size": 63488 00:09:57.098 }, 00:09:57.098 { 00:09:57.098 "name": "BaseBdev2", 00:09:57.098 "uuid": "f12f2fb7-206c-5eff-869b-759e6f95d21f", 00:09:57.098 "is_configured": true, 00:09:57.098 "data_offset": 2048, 00:09:57.098 "data_size": 63488 00:09:57.098 } 00:09:57.098 ] 00:09:57.098 }' 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.098 16:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.664 16:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.664 16:17:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.664 [2024-10-08 16:17:50.867233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.598 [2024-10-08 16:17:51.756488] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:58.598 [2024-10-08 16:17:51.756579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.598 [2024-10-08 16:17:51.756821] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.598 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.598 "name": "raid_bdev1", 00:09:58.598 "uuid": "c420dc3d-6072-4705-ba85-a09b3f0e531f", 00:09:58.599 "strip_size_kb": 0, 00:09:58.599 "state": "online", 00:09:58.599 "raid_level": "raid1", 00:09:58.599 "superblock": true, 00:09:58.599 "num_base_bdevs": 2, 00:09:58.599 "num_base_bdevs_discovered": 1, 00:09:58.599 "num_base_bdevs_operational": 1, 00:09:58.599 "base_bdevs_list": [ 00:09:58.599 { 00:09:58.599 "name": null, 00:09:58.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.599 "is_configured": false, 00:09:58.599 "data_offset": 0, 00:09:58.599 "data_size": 63488 00:09:58.599 }, 00:09:58.599 { 00:09:58.599 "name": 
"BaseBdev2", 00:09:58.599 "uuid": "f12f2fb7-206c-5eff-869b-759e6f95d21f", 00:09:58.599 "is_configured": true, 00:09:58.599 "data_offset": 2048, 00:09:58.599 "data_size": 63488 00:09:58.599 } 00:09:58.599 ] 00:09:58.599 }' 00:09:58.599 16:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.599 16:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.166 [2024-10-08 16:17:52.296623] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.166 [2024-10-08 16:17:52.296662] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.166 [2024-10-08 16:17:52.299999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.166 [2024-10-08 16:17:52.300055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.166 [2024-10-08 16:17:52.300136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.166 [2024-10-08 16:17:52.300154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:59.166 { 00:09:59.166 "results": [ 00:09:59.166 { 00:09:59.166 "job": "raid_bdev1", 00:09:59.166 "core_mask": "0x1", 00:09:59.166 "workload": "randrw", 00:09:59.166 "percentage": 50, 00:09:59.166 "status": "finished", 00:09:59.166 "queue_depth": 1, 00:09:59.166 "io_size": 131072, 00:09:59.166 "runtime": 1.426885, 00:09:59.166 "iops": 13778.265242118321, 00:09:59.166 "mibps": 1722.2831552647901, 00:09:59.166 "io_failed": 0, 00:09:59.166 "io_timeout": 0, 
00:09:59.166 "avg_latency_us": 67.79655599741052, 00:09:59.166 "min_latency_us": 40.72727272727273, 00:09:59.166 "max_latency_us": 1921.3963636363637 00:09:59.166 } 00:09:59.166 ], 00:09:59.166 "core_count": 1 00:09:59.166 } 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63951 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63951 ']' 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63951 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63951 00:09:59.166 killing process with pid 63951 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63951' 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63951 00:09:59.166 [2024-10-08 16:17:52.339184] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.166 16:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63951 00:09:59.166 [2024-10-08 16:17:52.461593] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6SWLyI0fGU 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:00.541 00:10:00.541 real 0m4.641s 00:10:00.541 user 0m5.689s 00:10:00.541 sys 0m0.593s 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.541 ************************************ 00:10:00.541 END TEST raid_write_error_test 00:10:00.541 ************************************ 00:10:00.541 16:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.541 16:17:53 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:00.541 16:17:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:00.541 16:17:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:00.541 16:17:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.541 16:17:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.541 16:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.541 ************************************ 00:10:00.541 START TEST raid_state_function_test 00:10:00.541 ************************************ 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.541 
16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64095 00:10:00.541 Process raid pid: 64095 00:10:00.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64095' 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64095 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64095 ']' 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.541 16:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.800 [2024-10-08 16:17:53.871759] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:10:00.800 [2024-10-08 16:17:53.872193] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.800 [2024-10-08 16:17:54.046926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.059 [2024-10-08 16:17:54.307674] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.318 [2024-10-08 16:17:54.511069] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.318 [2024-10-08 16:17:54.511129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.576 [2024-10-08 16:17:54.867907] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.576 [2024-10-08 16:17:54.868000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.576 [2024-10-08 16:17:54.868018] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.576 [2024-10-08 16:17:54.868036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.576 [2024-10-08 16:17:54.868047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.576 [2024-10-08 16:17:54.868062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.576 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.834 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.834 "name": "Existed_Raid", 00:10:01.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.834 "strip_size_kb": 64, 00:10:01.834 "state": "configuring", 00:10:01.834 "raid_level": "raid0", 00:10:01.834 "superblock": false, 00:10:01.834 "num_base_bdevs": 3, 00:10:01.834 "num_base_bdevs_discovered": 0, 00:10:01.834 "num_base_bdevs_operational": 3, 00:10:01.834 "base_bdevs_list": [ 00:10:01.834 { 00:10:01.834 "name": "BaseBdev1", 00:10:01.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.834 "is_configured": false, 00:10:01.834 "data_offset": 0, 00:10:01.834 "data_size": 0 00:10:01.834 }, 00:10:01.834 { 00:10:01.834 "name": "BaseBdev2", 00:10:01.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.834 "is_configured": false, 00:10:01.834 "data_offset": 0, 00:10:01.834 "data_size": 0 00:10:01.834 }, 00:10:01.834 { 00:10:01.834 "name": "BaseBdev3", 00:10:01.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.834 "is_configured": false, 00:10:01.834 "data_offset": 0, 00:10:01.834 "data_size": 0 00:10:01.834 } 00:10:01.834 ] 00:10:01.834 }' 00:10:01.834 16:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.834 16:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.092 16:17:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.092 [2024-10-08 16:17:55.375900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.092 [2024-10-08 16:17:55.375959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.092 [2024-10-08 16:17:55.383925] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.092 [2024-10-08 16:17:55.384121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.092 [2024-10-08 16:17:55.384148] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.092 [2024-10-08 16:17:55.384167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.092 [2024-10-08 16:17:55.384177] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.092 [2024-10-08 16:17:55.384191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:02.092 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 [2024-10-08 16:17:55.444015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.351 BaseBdev1 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 [ 00:10:02.351 { 00:10:02.351 "name": "BaseBdev1", 00:10:02.351 "aliases": [ 00:10:02.351 "07772ba4-95df-4a6d-8993-c24764a515ae" 00:10:02.351 ], 00:10:02.351 
"product_name": "Malloc disk", 00:10:02.351 "block_size": 512, 00:10:02.351 "num_blocks": 65536, 00:10:02.351 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:02.351 "assigned_rate_limits": { 00:10:02.351 "rw_ios_per_sec": 0, 00:10:02.351 "rw_mbytes_per_sec": 0, 00:10:02.351 "r_mbytes_per_sec": 0, 00:10:02.351 "w_mbytes_per_sec": 0 00:10:02.351 }, 00:10:02.351 "claimed": true, 00:10:02.351 "claim_type": "exclusive_write", 00:10:02.351 "zoned": false, 00:10:02.351 "supported_io_types": { 00:10:02.351 "read": true, 00:10:02.351 "write": true, 00:10:02.351 "unmap": true, 00:10:02.351 "flush": true, 00:10:02.351 "reset": true, 00:10:02.351 "nvme_admin": false, 00:10:02.351 "nvme_io": false, 00:10:02.351 "nvme_io_md": false, 00:10:02.351 "write_zeroes": true, 00:10:02.351 "zcopy": true, 00:10:02.351 "get_zone_info": false, 00:10:02.351 "zone_management": false, 00:10:02.351 "zone_append": false, 00:10:02.351 "compare": false, 00:10:02.351 "compare_and_write": false, 00:10:02.351 "abort": true, 00:10:02.351 "seek_hole": false, 00:10:02.351 "seek_data": false, 00:10:02.351 "copy": true, 00:10:02.351 "nvme_iov_md": false 00:10:02.351 }, 00:10:02.351 "memory_domains": [ 00:10:02.351 { 00:10:02.351 "dma_device_id": "system", 00:10:02.351 "dma_device_type": 1 00:10:02.351 }, 00:10:02.351 { 00:10:02.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.351 "dma_device_type": 2 00:10:02.351 } 00:10:02.351 ], 00:10:02.351 "driver_specific": {} 00:10:02.351 } 00:10:02.351 ] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.351 16:17:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.351 "name": "Existed_Raid", 00:10:02.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.351 "strip_size_kb": 64, 00:10:02.351 "state": "configuring", 00:10:02.352 "raid_level": "raid0", 00:10:02.352 "superblock": false, 00:10:02.352 "num_base_bdevs": 3, 00:10:02.352 "num_base_bdevs_discovered": 1, 00:10:02.352 "num_base_bdevs_operational": 3, 00:10:02.352 "base_bdevs_list": [ 00:10:02.352 { 00:10:02.352 "name": "BaseBdev1", 
00:10:02.352 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:02.352 "is_configured": true, 00:10:02.352 "data_offset": 0, 00:10:02.352 "data_size": 65536 00:10:02.352 }, 00:10:02.352 { 00:10:02.352 "name": "BaseBdev2", 00:10:02.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.352 "is_configured": false, 00:10:02.352 "data_offset": 0, 00:10:02.352 "data_size": 0 00:10:02.352 }, 00:10:02.352 { 00:10:02.352 "name": "BaseBdev3", 00:10:02.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.352 "is_configured": false, 00:10:02.352 "data_offset": 0, 00:10:02.352 "data_size": 0 00:10:02.352 } 00:10:02.352 ] 00:10:02.352 }' 00:10:02.352 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.352 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 [2024-10-08 16:17:55.996262] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.918 [2024-10-08 16:17:55.996329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.918 16:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 [2024-10-08 
16:17:56.004260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.918 [2024-10-08 16:17:56.006813] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.918 [2024-10-08 16:17:56.007049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.918 [2024-10-08 16:17:56.007077] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.918 [2024-10-08 16:17:56.007095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.918 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.918 "name": "Existed_Raid", 00:10:02.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.918 "strip_size_kb": 64, 00:10:02.918 "state": "configuring", 00:10:02.918 "raid_level": "raid0", 00:10:02.918 "superblock": false, 00:10:02.918 "num_base_bdevs": 3, 00:10:02.918 "num_base_bdevs_discovered": 1, 00:10:02.918 "num_base_bdevs_operational": 3, 00:10:02.918 "base_bdevs_list": [ 00:10:02.918 { 00:10:02.918 "name": "BaseBdev1", 00:10:02.919 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:02.919 "is_configured": true, 00:10:02.919 "data_offset": 0, 00:10:02.919 "data_size": 65536 00:10:02.919 }, 00:10:02.919 { 00:10:02.919 "name": "BaseBdev2", 00:10:02.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.919 "is_configured": false, 00:10:02.919 "data_offset": 0, 00:10:02.919 "data_size": 0 00:10:02.919 }, 00:10:02.919 { 00:10:02.919 "name": "BaseBdev3", 00:10:02.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.919 "is_configured": false, 00:10:02.919 "data_offset": 0, 00:10:02.919 "data_size": 0 00:10:02.919 } 00:10:02.919 ] 00:10:02.919 }' 00:10:02.919 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:02.919 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.549 [2024-10-08 16:17:56.610946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.549 BaseBdev2 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.549 16:17:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.549 [ 00:10:03.549 { 00:10:03.549 "name": "BaseBdev2", 00:10:03.549 "aliases": [ 00:10:03.549 "c8889d81-90c9-40bf-bed9-d35d909b664b" 00:10:03.549 ], 00:10:03.549 "product_name": "Malloc disk", 00:10:03.549 "block_size": 512, 00:10:03.549 "num_blocks": 65536, 00:10:03.549 "uuid": "c8889d81-90c9-40bf-bed9-d35d909b664b", 00:10:03.549 "assigned_rate_limits": { 00:10:03.549 "rw_ios_per_sec": 0, 00:10:03.549 "rw_mbytes_per_sec": 0, 00:10:03.549 "r_mbytes_per_sec": 0, 00:10:03.549 "w_mbytes_per_sec": 0 00:10:03.549 }, 00:10:03.549 "claimed": true, 00:10:03.549 "claim_type": "exclusive_write", 00:10:03.549 "zoned": false, 00:10:03.549 "supported_io_types": { 00:10:03.549 "read": true, 00:10:03.549 "write": true, 00:10:03.549 "unmap": true, 00:10:03.549 "flush": true, 00:10:03.549 "reset": true, 00:10:03.549 "nvme_admin": false, 00:10:03.549 "nvme_io": false, 00:10:03.549 "nvme_io_md": false, 00:10:03.549 "write_zeroes": true, 00:10:03.549 "zcopy": true, 00:10:03.549 "get_zone_info": false, 00:10:03.549 "zone_management": false, 00:10:03.549 "zone_append": false, 00:10:03.549 "compare": false, 00:10:03.549 "compare_and_write": false, 00:10:03.549 "abort": true, 00:10:03.549 "seek_hole": false, 00:10:03.549 "seek_data": false, 00:10:03.549 "copy": true, 00:10:03.549 "nvme_iov_md": false 00:10:03.549 }, 00:10:03.549 "memory_domains": [ 00:10:03.549 { 00:10:03.549 "dma_device_id": "system", 00:10:03.549 "dma_device_type": 1 00:10:03.549 }, 00:10:03.549 { 00:10:03.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.549 "dma_device_type": 2 00:10:03.549 } 00:10:03.549 ], 00:10:03.549 "driver_specific": {} 00:10:03.549 } 00:10:03.549 ] 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.549 16:17:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.549 "name": "Existed_Raid", 00:10:03.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.549 "strip_size_kb": 64, 00:10:03.549 "state": "configuring", 00:10:03.549 "raid_level": "raid0", 00:10:03.549 "superblock": false, 00:10:03.549 "num_base_bdevs": 3, 00:10:03.549 "num_base_bdevs_discovered": 2, 00:10:03.549 "num_base_bdevs_operational": 3, 00:10:03.549 "base_bdevs_list": [ 00:10:03.549 { 00:10:03.549 "name": "BaseBdev1", 00:10:03.549 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:03.549 "is_configured": true, 00:10:03.549 "data_offset": 0, 00:10:03.549 "data_size": 65536 00:10:03.549 }, 00:10:03.549 { 00:10:03.549 "name": "BaseBdev2", 00:10:03.549 "uuid": "c8889d81-90c9-40bf-bed9-d35d909b664b", 00:10:03.549 "is_configured": true, 00:10:03.549 "data_offset": 0, 00:10:03.549 "data_size": 65536 00:10:03.549 }, 00:10:03.549 { 00:10:03.549 "name": "BaseBdev3", 00:10:03.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.549 "is_configured": false, 00:10:03.549 "data_offset": 0, 00:10:03.549 "data_size": 0 00:10:03.549 } 00:10:03.549 ] 00:10:03.549 }' 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.549 16:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.116 [2024-10-08 16:17:57.186107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.116 [2024-10-08 16:17:57.186413] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.116 [2024-10-08 16:17:57.186497] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:04.116 [2024-10-08 16:17:57.187049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.116 [2024-10-08 16:17:57.187284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.116 [2024-10-08 16:17:57.187304] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.116 [2024-10-08 16:17:57.187612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.116 BaseBdev3 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.116 
16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.116 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.116 [ 00:10:04.116 { 00:10:04.116 "name": "BaseBdev3", 00:10:04.116 "aliases": [ 00:10:04.116 "489dfd2e-008c-4b02-aaf4-b7a210c3d58e" 00:10:04.116 ], 00:10:04.116 "product_name": "Malloc disk", 00:10:04.116 "block_size": 512, 00:10:04.116 "num_blocks": 65536, 00:10:04.116 "uuid": "489dfd2e-008c-4b02-aaf4-b7a210c3d58e", 00:10:04.116 "assigned_rate_limits": { 00:10:04.116 "rw_ios_per_sec": 0, 00:10:04.116 "rw_mbytes_per_sec": 0, 00:10:04.116 "r_mbytes_per_sec": 0, 00:10:04.116 "w_mbytes_per_sec": 0 00:10:04.116 }, 00:10:04.116 "claimed": true, 00:10:04.116 "claim_type": "exclusive_write", 00:10:04.116 "zoned": false, 00:10:04.116 "supported_io_types": { 00:10:04.116 "read": true, 00:10:04.116 "write": true, 00:10:04.116 "unmap": true, 00:10:04.117 "flush": true, 00:10:04.117 "reset": true, 00:10:04.117 "nvme_admin": false, 00:10:04.117 "nvme_io": false, 00:10:04.117 "nvme_io_md": false, 00:10:04.117 "write_zeroes": true, 00:10:04.117 "zcopy": true, 00:10:04.117 "get_zone_info": false, 00:10:04.117 "zone_management": false, 00:10:04.117 "zone_append": false, 00:10:04.117 "compare": false, 00:10:04.117 "compare_and_write": false, 00:10:04.117 "abort": true, 00:10:04.117 "seek_hole": false, 00:10:04.117 "seek_data": false, 00:10:04.117 "copy": true, 00:10:04.117 "nvme_iov_md": false 00:10:04.117 }, 00:10:04.117 "memory_domains": [ 00:10:04.117 { 00:10:04.117 "dma_device_id": "system", 00:10:04.117 "dma_device_type": 1 00:10:04.117 }, 00:10:04.117 { 00:10:04.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.117 "dma_device_type": 2 00:10:04.117 } 00:10:04.117 ], 00:10:04.117 "driver_specific": {} 00:10:04.117 } 00:10:04.117 ] 
00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.117 "name": "Existed_Raid", 00:10:04.117 "uuid": "13fb6899-0777-4c87-87e4-5e81a3797063", 00:10:04.117 "strip_size_kb": 64, 00:10:04.117 "state": "online", 00:10:04.117 "raid_level": "raid0", 00:10:04.117 "superblock": false, 00:10:04.117 "num_base_bdevs": 3, 00:10:04.117 "num_base_bdevs_discovered": 3, 00:10:04.117 "num_base_bdevs_operational": 3, 00:10:04.117 "base_bdevs_list": [ 00:10:04.117 { 00:10:04.117 "name": "BaseBdev1", 00:10:04.117 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:04.117 "is_configured": true, 00:10:04.117 "data_offset": 0, 00:10:04.117 "data_size": 65536 00:10:04.117 }, 00:10:04.117 { 00:10:04.117 "name": "BaseBdev2", 00:10:04.117 "uuid": "c8889d81-90c9-40bf-bed9-d35d909b664b", 00:10:04.117 "is_configured": true, 00:10:04.117 "data_offset": 0, 00:10:04.117 "data_size": 65536 00:10:04.117 }, 00:10:04.117 { 00:10:04.117 "name": "BaseBdev3", 00:10:04.117 "uuid": "489dfd2e-008c-4b02-aaf4-b7a210c3d58e", 00:10:04.117 "is_configured": true, 00:10:04.117 "data_offset": 0, 00:10:04.117 "data_size": 65536 00:10:04.117 } 00:10:04.117 ] 00:10:04.117 }' 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.117 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.682 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.682 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.683 [2024-10-08 16:17:57.730743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.683 "name": "Existed_Raid", 00:10:04.683 "aliases": [ 00:10:04.683 "13fb6899-0777-4c87-87e4-5e81a3797063" 00:10:04.683 ], 00:10:04.683 "product_name": "Raid Volume", 00:10:04.683 "block_size": 512, 00:10:04.683 "num_blocks": 196608, 00:10:04.683 "uuid": "13fb6899-0777-4c87-87e4-5e81a3797063", 00:10:04.683 "assigned_rate_limits": { 00:10:04.683 "rw_ios_per_sec": 0, 00:10:04.683 "rw_mbytes_per_sec": 0, 00:10:04.683 "r_mbytes_per_sec": 0, 00:10:04.683 "w_mbytes_per_sec": 0 00:10:04.683 }, 00:10:04.683 "claimed": false, 00:10:04.683 "zoned": false, 00:10:04.683 "supported_io_types": { 00:10:04.683 "read": true, 00:10:04.683 "write": true, 00:10:04.683 "unmap": true, 00:10:04.683 "flush": true, 00:10:04.683 "reset": true, 00:10:04.683 "nvme_admin": false, 00:10:04.683 "nvme_io": false, 00:10:04.683 "nvme_io_md": false, 00:10:04.683 "write_zeroes": true, 00:10:04.683 "zcopy": false, 00:10:04.683 "get_zone_info": false, 00:10:04.683 "zone_management": false, 00:10:04.683 
"zone_append": false, 00:10:04.683 "compare": false, 00:10:04.683 "compare_and_write": false, 00:10:04.683 "abort": false, 00:10:04.683 "seek_hole": false, 00:10:04.683 "seek_data": false, 00:10:04.683 "copy": false, 00:10:04.683 "nvme_iov_md": false 00:10:04.683 }, 00:10:04.683 "memory_domains": [ 00:10:04.683 { 00:10:04.683 "dma_device_id": "system", 00:10:04.683 "dma_device_type": 1 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.683 "dma_device_type": 2 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "dma_device_id": "system", 00:10:04.683 "dma_device_type": 1 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.683 "dma_device_type": 2 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "dma_device_id": "system", 00:10:04.683 "dma_device_type": 1 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.683 "dma_device_type": 2 00:10:04.683 } 00:10:04.683 ], 00:10:04.683 "driver_specific": { 00:10:04.683 "raid": { 00:10:04.683 "uuid": "13fb6899-0777-4c87-87e4-5e81a3797063", 00:10:04.683 "strip_size_kb": 64, 00:10:04.683 "state": "online", 00:10:04.683 "raid_level": "raid0", 00:10:04.683 "superblock": false, 00:10:04.683 "num_base_bdevs": 3, 00:10:04.683 "num_base_bdevs_discovered": 3, 00:10:04.683 "num_base_bdevs_operational": 3, 00:10:04.683 "base_bdevs_list": [ 00:10:04.683 { 00:10:04.683 "name": "BaseBdev1", 00:10:04.683 "uuid": "07772ba4-95df-4a6d-8993-c24764a515ae", 00:10:04.683 "is_configured": true, 00:10:04.683 "data_offset": 0, 00:10:04.683 "data_size": 65536 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "name": "BaseBdev2", 00:10:04.683 "uuid": "c8889d81-90c9-40bf-bed9-d35d909b664b", 00:10:04.683 "is_configured": true, 00:10:04.683 "data_offset": 0, 00:10:04.683 "data_size": 65536 00:10:04.683 }, 00:10:04.683 { 00:10:04.683 "name": "BaseBdev3", 00:10:04.683 "uuid": "489dfd2e-008c-4b02-aaf4-b7a210c3d58e", 00:10:04.683 "is_configured": true, 
00:10:04.683 "data_offset": 0, 00:10:04.683 "data_size": 65536 00:10:04.683 } 00:10:04.683 ] 00:10:04.683 } 00:10:04.683 } 00:10:04.683 }' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.683 BaseBdev2 00:10:04.683 BaseBdev3' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.683 16:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.683 16:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.683 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.683 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.683 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.683 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.941 [2024-10-08 16:17:58.062455] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.941 [2024-10-08 16:17:58.062491] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.941 [2024-10-08 16:17:58.062610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.941 "name": "Existed_Raid", 00:10:04.941 "uuid": "13fb6899-0777-4c87-87e4-5e81a3797063", 00:10:04.941 "strip_size_kb": 64, 00:10:04.941 "state": "offline", 00:10:04.941 "raid_level": "raid0", 00:10:04.941 "superblock": false, 00:10:04.941 "num_base_bdevs": 3, 00:10:04.941 "num_base_bdevs_discovered": 2, 00:10:04.941 "num_base_bdevs_operational": 2, 00:10:04.941 "base_bdevs_list": [ 00:10:04.941 { 00:10:04.941 "name": null, 00:10:04.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.941 "is_configured": false, 00:10:04.941 "data_offset": 0, 00:10:04.941 "data_size": 65536 00:10:04.941 }, 00:10:04.941 { 00:10:04.941 "name": "BaseBdev2", 00:10:04.941 "uuid": "c8889d81-90c9-40bf-bed9-d35d909b664b", 00:10:04.941 "is_configured": true, 00:10:04.941 "data_offset": 0, 00:10:04.941 "data_size": 65536 00:10:04.941 }, 00:10:04.941 { 00:10:04.941 "name": "BaseBdev3", 00:10:04.941 "uuid": "489dfd2e-008c-4b02-aaf4-b7a210c3d58e", 00:10:04.941 "is_configured": true, 00:10:04.941 "data_offset": 0, 00:10:04.941 "data_size": 65536 00:10:04.941 } 00:10:04.941 ] 00:10:04.941 }' 00:10:04.941 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.941 16:17:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.506 [2024-10-08 16:17:58.697954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.506 16:17:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.506 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.764 [2024-10-08 16:17:58.837123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.764 [2024-10-08 16:17:58.837350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.764 16:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.764 BaseBdev2 00:10:05.764 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.764 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.764 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.765 [ 00:10:05.765 { 00:10:05.765 "name": "BaseBdev2", 00:10:05.765 "aliases": [ 00:10:05.765 "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf" 00:10:05.765 ], 00:10:05.765 "product_name": "Malloc disk", 00:10:05.765 "block_size": 512, 00:10:05.765 "num_blocks": 65536, 00:10:05.765 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:05.765 "assigned_rate_limits": { 00:10:05.765 "rw_ios_per_sec": 0, 00:10:05.765 "rw_mbytes_per_sec": 0, 00:10:05.765 "r_mbytes_per_sec": 0, 00:10:05.765 "w_mbytes_per_sec": 0 00:10:05.765 }, 00:10:05.765 "claimed": false, 00:10:05.765 "zoned": false, 00:10:05.765 "supported_io_types": { 00:10:05.765 "read": true, 00:10:05.765 "write": true, 00:10:05.765 "unmap": true, 00:10:05.765 "flush": true, 00:10:05.765 "reset": true, 00:10:05.765 "nvme_admin": false, 00:10:05.765 "nvme_io": false, 00:10:05.765 "nvme_io_md": false, 00:10:05.765 "write_zeroes": true, 00:10:05.765 "zcopy": true, 00:10:05.765 "get_zone_info": false, 00:10:05.765 "zone_management": false, 00:10:05.765 "zone_append": false, 00:10:05.765 "compare": false, 00:10:05.765 "compare_and_write": false, 00:10:05.765 "abort": true, 00:10:05.765 "seek_hole": false, 00:10:05.765 "seek_data": false, 00:10:05.765 "copy": true, 00:10:05.765 "nvme_iov_md": false 00:10:05.765 }, 00:10:05.765 "memory_domains": [ 00:10:05.765 { 00:10:05.765 "dma_device_id": "system", 00:10:05.765 "dma_device_type": 1 00:10:05.765 }, 
00:10:05.765 { 00:10:05.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.765 "dma_device_type": 2 00:10:05.765 } 00:10:05.765 ], 00:10:05.765 "driver_specific": {} 00:10:05.765 } 00:10:05.765 ] 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.765 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.023 BaseBdev3 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.023 [ 00:10:06.023 { 00:10:06.023 "name": "BaseBdev3", 00:10:06.023 "aliases": [ 00:10:06.023 "3478b031-7f79-45c4-978b-34e3453a1480" 00:10:06.023 ], 00:10:06.023 "product_name": "Malloc disk", 00:10:06.023 "block_size": 512, 00:10:06.023 "num_blocks": 65536, 00:10:06.023 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:06.023 "assigned_rate_limits": { 00:10:06.023 "rw_ios_per_sec": 0, 00:10:06.023 "rw_mbytes_per_sec": 0, 00:10:06.023 "r_mbytes_per_sec": 0, 00:10:06.023 "w_mbytes_per_sec": 0 00:10:06.023 }, 00:10:06.023 "claimed": false, 00:10:06.023 "zoned": false, 00:10:06.023 "supported_io_types": { 00:10:06.023 "read": true, 00:10:06.023 "write": true, 00:10:06.023 "unmap": true, 00:10:06.023 "flush": true, 00:10:06.023 "reset": true, 00:10:06.023 "nvme_admin": false, 00:10:06.023 "nvme_io": false, 00:10:06.023 "nvme_io_md": false, 00:10:06.023 "write_zeroes": true, 00:10:06.023 "zcopy": true, 00:10:06.023 "get_zone_info": false, 00:10:06.023 "zone_management": false, 00:10:06.023 "zone_append": false, 00:10:06.023 "compare": false, 00:10:06.023 "compare_and_write": false, 00:10:06.023 "abort": true, 00:10:06.023 "seek_hole": false, 00:10:06.023 "seek_data": false, 00:10:06.023 "copy": true, 00:10:06.023 "nvme_iov_md": false 00:10:06.023 }, 00:10:06.023 "memory_domains": [ 00:10:06.023 { 00:10:06.023 "dma_device_id": "system", 00:10:06.023 "dma_device_type": 1 00:10:06.023 }, 00:10:06.023 { 
00:10:06.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.023 "dma_device_type": 2 00:10:06.023 } 00:10:06.023 ], 00:10:06.023 "driver_specific": {} 00:10:06.023 } 00:10:06.023 ] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.023 [2024-10-08 16:17:59.130299] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.023 [2024-10-08 16:17:59.130362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.023 [2024-10-08 16:17:59.130414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.023 [2024-10-08 16:17:59.132957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.023 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.024 "name": "Existed_Raid", 00:10:06.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.024 "strip_size_kb": 64, 00:10:06.024 "state": "configuring", 00:10:06.024 "raid_level": "raid0", 00:10:06.024 "superblock": false, 00:10:06.024 "num_base_bdevs": 3, 00:10:06.024 "num_base_bdevs_discovered": 2, 00:10:06.024 "num_base_bdevs_operational": 3, 00:10:06.024 "base_bdevs_list": [ 00:10:06.024 { 00:10:06.024 "name": "BaseBdev1", 00:10:06.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.024 
"is_configured": false, 00:10:06.024 "data_offset": 0, 00:10:06.024 "data_size": 0 00:10:06.024 }, 00:10:06.024 { 00:10:06.024 "name": "BaseBdev2", 00:10:06.024 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:06.024 "is_configured": true, 00:10:06.024 "data_offset": 0, 00:10:06.024 "data_size": 65536 00:10:06.024 }, 00:10:06.024 { 00:10:06.024 "name": "BaseBdev3", 00:10:06.024 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:06.024 "is_configured": true, 00:10:06.024 "data_offset": 0, 00:10:06.024 "data_size": 65536 00:10:06.024 } 00:10:06.024 ] 00:10:06.024 }' 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.024 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 [2024-10-08 16:17:59.638446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.590 16:17:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.590 "name": "Existed_Raid", 00:10:06.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.590 "strip_size_kb": 64, 00:10:06.590 "state": "configuring", 00:10:06.590 "raid_level": "raid0", 00:10:06.590 "superblock": false, 00:10:06.590 "num_base_bdevs": 3, 00:10:06.590 "num_base_bdevs_discovered": 1, 00:10:06.590 "num_base_bdevs_operational": 3, 00:10:06.590 "base_bdevs_list": [ 00:10:06.590 { 00:10:06.590 "name": "BaseBdev1", 00:10:06.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.590 "is_configured": false, 00:10:06.590 "data_offset": 0, 00:10:06.590 "data_size": 0 00:10:06.590 }, 00:10:06.590 { 00:10:06.590 "name": null, 00:10:06.590 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:06.590 "is_configured": false, 00:10:06.590 "data_offset": 0, 
00:10:06.590 "data_size": 65536 00:10:06.590 }, 00:10:06.590 { 00:10:06.590 "name": "BaseBdev3", 00:10:06.590 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:06.590 "is_configured": true, 00:10:06.590 "data_offset": 0, 00:10:06.590 "data_size": 65536 00:10:06.590 } 00:10:06.590 ] 00:10:06.590 }' 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.590 16:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.847 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.847 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.847 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.847 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.847 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.105 [2024-10-08 16:18:00.233299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.105 BaseBdev1 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.105 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 [ 00:10:07.106 { 00:10:07.106 "name": "BaseBdev1", 00:10:07.106 "aliases": [ 00:10:07.106 "a71ef055-a018-49ff-98c8-b89c7510c907" 00:10:07.106 ], 00:10:07.106 "product_name": "Malloc disk", 00:10:07.106 "block_size": 512, 00:10:07.106 "num_blocks": 65536, 00:10:07.106 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:07.106 "assigned_rate_limits": { 00:10:07.106 "rw_ios_per_sec": 0, 00:10:07.106 "rw_mbytes_per_sec": 0, 00:10:07.106 "r_mbytes_per_sec": 0, 00:10:07.106 "w_mbytes_per_sec": 0 00:10:07.106 }, 00:10:07.106 "claimed": true, 00:10:07.106 "claim_type": "exclusive_write", 00:10:07.106 "zoned": false, 00:10:07.106 "supported_io_types": { 00:10:07.106 "read": true, 00:10:07.106 "write": true, 00:10:07.106 "unmap": 
true, 00:10:07.106 "flush": true, 00:10:07.106 "reset": true, 00:10:07.106 "nvme_admin": false, 00:10:07.106 "nvme_io": false, 00:10:07.106 "nvme_io_md": false, 00:10:07.106 "write_zeroes": true, 00:10:07.106 "zcopy": true, 00:10:07.106 "get_zone_info": false, 00:10:07.106 "zone_management": false, 00:10:07.106 "zone_append": false, 00:10:07.106 "compare": false, 00:10:07.106 "compare_and_write": false, 00:10:07.106 "abort": true, 00:10:07.106 "seek_hole": false, 00:10:07.106 "seek_data": false, 00:10:07.106 "copy": true, 00:10:07.106 "nvme_iov_md": false 00:10:07.106 }, 00:10:07.106 "memory_domains": [ 00:10:07.106 { 00:10:07.106 "dma_device_id": "system", 00:10:07.106 "dma_device_type": 1 00:10:07.106 }, 00:10:07.106 { 00:10:07.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.106 "dma_device_type": 2 00:10:07.106 } 00:10:07.106 ], 00:10:07.106 "driver_specific": {} 00:10:07.106 } 00:10:07.106 ] 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.106 16:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.106 "name": "Existed_Raid", 00:10:07.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.106 "strip_size_kb": 64, 00:10:07.106 "state": "configuring", 00:10:07.106 "raid_level": "raid0", 00:10:07.106 "superblock": false, 00:10:07.106 "num_base_bdevs": 3, 00:10:07.106 "num_base_bdevs_discovered": 2, 00:10:07.106 "num_base_bdevs_operational": 3, 00:10:07.106 "base_bdevs_list": [ 00:10:07.106 { 00:10:07.106 "name": "BaseBdev1", 00:10:07.106 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:07.106 "is_configured": true, 00:10:07.106 "data_offset": 0, 00:10:07.106 "data_size": 65536 00:10:07.106 }, 00:10:07.106 { 00:10:07.106 "name": null, 00:10:07.106 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:07.106 "is_configured": false, 00:10:07.106 "data_offset": 0, 00:10:07.106 "data_size": 65536 00:10:07.106 }, 00:10:07.106 { 00:10:07.106 "name": "BaseBdev3", 00:10:07.106 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:07.106 "is_configured": true, 00:10:07.106 "data_offset": 0, 
00:10:07.106 "data_size": 65536 00:10:07.106 } 00:10:07.106 ] 00:10:07.106 }' 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.106 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 [2024-10-08 16:18:00.845564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.670 "name": "Existed_Raid", 00:10:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.670 "strip_size_kb": 64, 00:10:07.670 "state": "configuring", 00:10:07.670 "raid_level": "raid0", 00:10:07.670 "superblock": false, 00:10:07.670 "num_base_bdevs": 3, 00:10:07.670 "num_base_bdevs_discovered": 1, 00:10:07.670 "num_base_bdevs_operational": 3, 00:10:07.670 "base_bdevs_list": [ 00:10:07.670 { 00:10:07.670 "name": "BaseBdev1", 00:10:07.670 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:07.670 "is_configured": true, 00:10:07.670 "data_offset": 0, 00:10:07.670 "data_size": 65536 00:10:07.670 }, 00:10:07.670 { 
00:10:07.670 "name": null, 00:10:07.670 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:07.670 "is_configured": false, 00:10:07.670 "data_offset": 0, 00:10:07.670 "data_size": 65536 00:10:07.670 }, 00:10:07.670 { 00:10:07.670 "name": null, 00:10:07.670 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:07.670 "is_configured": false, 00:10:07.670 "data_offset": 0, 00:10:07.670 "data_size": 65536 00:10:07.670 } 00:10:07.670 ] 00:10:07.670 }' 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.670 16:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.236 [2024-10-08 16:18:01.417727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.236 "name": "Existed_Raid", 00:10:08.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.236 "strip_size_kb": 64, 00:10:08.236 "state": "configuring", 00:10:08.236 "raid_level": "raid0", 00:10:08.236 
"superblock": false, 00:10:08.236 "num_base_bdevs": 3, 00:10:08.236 "num_base_bdevs_discovered": 2, 00:10:08.236 "num_base_bdevs_operational": 3, 00:10:08.236 "base_bdevs_list": [ 00:10:08.236 { 00:10:08.236 "name": "BaseBdev1", 00:10:08.236 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:08.236 "is_configured": true, 00:10:08.236 "data_offset": 0, 00:10:08.236 "data_size": 65536 00:10:08.236 }, 00:10:08.236 { 00:10:08.236 "name": null, 00:10:08.236 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:08.236 "is_configured": false, 00:10:08.236 "data_offset": 0, 00:10:08.236 "data_size": 65536 00:10:08.236 }, 00:10:08.236 { 00:10:08.236 "name": "BaseBdev3", 00:10:08.236 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:08.236 "is_configured": true, 00:10:08.236 "data_offset": 0, 00:10:08.236 "data_size": 65536 00:10:08.236 } 00:10:08.236 ] 00:10:08.236 }' 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.236 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:08.800 16:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.800 [2024-10-08 16:18:01.985923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.800 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.801 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.801 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.801 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.801 16:18:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.058 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.058 "name": "Existed_Raid", 00:10:09.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.058 "strip_size_kb": 64, 00:10:09.058 "state": "configuring", 00:10:09.058 "raid_level": "raid0", 00:10:09.058 "superblock": false, 00:10:09.058 "num_base_bdevs": 3, 00:10:09.058 "num_base_bdevs_discovered": 1, 00:10:09.058 "num_base_bdevs_operational": 3, 00:10:09.058 "base_bdevs_list": [ 00:10:09.058 { 00:10:09.058 "name": null, 00:10:09.058 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:09.058 "is_configured": false, 00:10:09.058 "data_offset": 0, 00:10:09.058 "data_size": 65536 00:10:09.058 }, 00:10:09.058 { 00:10:09.058 "name": null, 00:10:09.058 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:09.058 "is_configured": false, 00:10:09.058 "data_offset": 0, 00:10:09.058 "data_size": 65536 00:10:09.058 }, 00:10:09.058 { 00:10:09.058 "name": "BaseBdev3", 00:10:09.058 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:09.058 "is_configured": true, 00:10:09.058 "data_offset": 0, 00:10:09.058 "data_size": 65536 00:10:09.058 } 00:10:09.058 ] 00:10:09.058 }' 00:10:09.058 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.058 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.315 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.315 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.315 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.315 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.315 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.595 [2024-10-08 16:18:02.643274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.595 "name": "Existed_Raid", 00:10:09.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.595 "strip_size_kb": 64, 00:10:09.595 "state": "configuring", 00:10:09.595 "raid_level": "raid0", 00:10:09.595 "superblock": false, 00:10:09.595 "num_base_bdevs": 3, 00:10:09.595 "num_base_bdevs_discovered": 2, 00:10:09.595 "num_base_bdevs_operational": 3, 00:10:09.595 "base_bdevs_list": [ 00:10:09.595 { 00:10:09.595 "name": null, 00:10:09.595 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:09.595 "is_configured": false, 00:10:09.595 "data_offset": 0, 00:10:09.595 "data_size": 65536 00:10:09.595 }, 00:10:09.595 { 00:10:09.595 "name": "BaseBdev2", 00:10:09.595 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:09.595 "is_configured": true, 00:10:09.595 "data_offset": 0, 00:10:09.595 "data_size": 65536 00:10:09.595 }, 00:10:09.595 { 00:10:09.595 "name": "BaseBdev3", 00:10:09.595 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:09.595 "is_configured": true, 00:10:09.595 "data_offset": 0, 00:10:09.595 "data_size": 65536 00:10:09.595 } 00:10:09.595 ] 00:10:09.595 }' 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.595 16:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.854 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.854 
16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.854 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.854 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.854 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a71ef055-a018-49ff-98c8-b89c7510c907 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.113 [2024-10-08 16:18:03.297945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.113 [2024-10-08 16:18:03.298206] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.113 [2024-10-08 16:18:03.298238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:10.113 [2024-10-08 16:18:03.298595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:10.113 [2024-10-08 16:18:03.298792] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.113 [2024-10-08 16:18:03.298809] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:10.113 [2024-10-08 16:18:03.299111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.113 NewBaseBdev 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:10.113 [ 00:10:10.113 { 00:10:10.113 "name": "NewBaseBdev", 00:10:10.113 "aliases": [ 00:10:10.113 "a71ef055-a018-49ff-98c8-b89c7510c907" 00:10:10.113 ], 00:10:10.113 "product_name": "Malloc disk", 00:10:10.113 "block_size": 512, 00:10:10.113 "num_blocks": 65536, 00:10:10.113 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:10.113 "assigned_rate_limits": { 00:10:10.113 "rw_ios_per_sec": 0, 00:10:10.113 "rw_mbytes_per_sec": 0, 00:10:10.113 "r_mbytes_per_sec": 0, 00:10:10.113 "w_mbytes_per_sec": 0 00:10:10.113 }, 00:10:10.113 "claimed": true, 00:10:10.113 "claim_type": "exclusive_write", 00:10:10.113 "zoned": false, 00:10:10.113 "supported_io_types": { 00:10:10.113 "read": true, 00:10:10.113 "write": true, 00:10:10.113 "unmap": true, 00:10:10.113 "flush": true, 00:10:10.113 "reset": true, 00:10:10.113 "nvme_admin": false, 00:10:10.113 "nvme_io": false, 00:10:10.113 "nvme_io_md": false, 00:10:10.113 "write_zeroes": true, 00:10:10.113 "zcopy": true, 00:10:10.113 "get_zone_info": false, 00:10:10.113 "zone_management": false, 00:10:10.113 "zone_append": false, 00:10:10.113 "compare": false, 00:10:10.113 "compare_and_write": false, 00:10:10.113 "abort": true, 00:10:10.113 "seek_hole": false, 00:10:10.113 "seek_data": false, 00:10:10.113 "copy": true, 00:10:10.113 "nvme_iov_md": false 00:10:10.113 }, 00:10:10.113 "memory_domains": [ 00:10:10.113 { 00:10:10.113 "dma_device_id": "system", 00:10:10.113 "dma_device_type": 1 00:10:10.113 }, 00:10:10.113 { 00:10:10.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.113 "dma_device_type": 2 00:10:10.113 } 00:10:10.113 ], 00:10:10.113 "driver_specific": {} 00:10:10.113 } 00:10:10.113 ] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.113 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.114 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.114 "name": "Existed_Raid", 00:10:10.114 "uuid": "cab64e9a-8d77-4190-84ca-48dce0bbd6bc", 00:10:10.114 "strip_size_kb": 64, 00:10:10.114 "state": "online", 00:10:10.114 "raid_level": "raid0", 00:10:10.114 "superblock": false, 00:10:10.114 "num_base_bdevs": 3, 00:10:10.114 
"num_base_bdevs_discovered": 3, 00:10:10.114 "num_base_bdevs_operational": 3, 00:10:10.114 "base_bdevs_list": [ 00:10:10.114 { 00:10:10.114 "name": "NewBaseBdev", 00:10:10.114 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:10.114 "is_configured": true, 00:10:10.114 "data_offset": 0, 00:10:10.114 "data_size": 65536 00:10:10.114 }, 00:10:10.114 { 00:10:10.114 "name": "BaseBdev2", 00:10:10.114 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:10.114 "is_configured": true, 00:10:10.114 "data_offset": 0, 00:10:10.114 "data_size": 65536 00:10:10.114 }, 00:10:10.114 { 00:10:10.114 "name": "BaseBdev3", 00:10:10.114 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:10.114 "is_configured": true, 00:10:10.114 "data_offset": 0, 00:10:10.114 "data_size": 65536 00:10:10.114 } 00:10:10.114 ] 00:10:10.114 }' 00:10:10.114 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.114 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.678 [2024-10-08 16:18:03.866625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.678 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.678 "name": "Existed_Raid", 00:10:10.678 "aliases": [ 00:10:10.678 "cab64e9a-8d77-4190-84ca-48dce0bbd6bc" 00:10:10.678 ], 00:10:10.678 "product_name": "Raid Volume", 00:10:10.678 "block_size": 512, 00:10:10.678 "num_blocks": 196608, 00:10:10.678 "uuid": "cab64e9a-8d77-4190-84ca-48dce0bbd6bc", 00:10:10.678 "assigned_rate_limits": { 00:10:10.678 "rw_ios_per_sec": 0, 00:10:10.678 "rw_mbytes_per_sec": 0, 00:10:10.678 "r_mbytes_per_sec": 0, 00:10:10.678 "w_mbytes_per_sec": 0 00:10:10.678 }, 00:10:10.678 "claimed": false, 00:10:10.678 "zoned": false, 00:10:10.678 "supported_io_types": { 00:10:10.678 "read": true, 00:10:10.678 "write": true, 00:10:10.678 "unmap": true, 00:10:10.678 "flush": true, 00:10:10.678 "reset": true, 00:10:10.678 "nvme_admin": false, 00:10:10.678 "nvme_io": false, 00:10:10.678 "nvme_io_md": false, 00:10:10.678 "write_zeroes": true, 00:10:10.678 "zcopy": false, 00:10:10.678 "get_zone_info": false, 00:10:10.678 "zone_management": false, 00:10:10.678 "zone_append": false, 00:10:10.678 "compare": false, 00:10:10.678 "compare_and_write": false, 00:10:10.678 "abort": false, 00:10:10.678 "seek_hole": false, 00:10:10.678 "seek_data": false, 00:10:10.678 "copy": false, 00:10:10.678 "nvme_iov_md": false 00:10:10.678 }, 00:10:10.678 "memory_domains": [ 00:10:10.678 { 00:10:10.678 "dma_device_id": "system", 00:10:10.678 "dma_device_type": 1 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.678 "dma_device_type": 2 00:10:10.678 }, 
00:10:10.678 { 00:10:10.678 "dma_device_id": "system", 00:10:10.678 "dma_device_type": 1 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.678 "dma_device_type": 2 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "dma_device_id": "system", 00:10:10.678 "dma_device_type": 1 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.678 "dma_device_type": 2 00:10:10.678 } 00:10:10.678 ], 00:10:10.678 "driver_specific": { 00:10:10.678 "raid": { 00:10:10.678 "uuid": "cab64e9a-8d77-4190-84ca-48dce0bbd6bc", 00:10:10.678 "strip_size_kb": 64, 00:10:10.678 "state": "online", 00:10:10.678 "raid_level": "raid0", 00:10:10.678 "superblock": false, 00:10:10.678 "num_base_bdevs": 3, 00:10:10.678 "num_base_bdevs_discovered": 3, 00:10:10.678 "num_base_bdevs_operational": 3, 00:10:10.678 "base_bdevs_list": [ 00:10:10.678 { 00:10:10.678 "name": "NewBaseBdev", 00:10:10.678 "uuid": "a71ef055-a018-49ff-98c8-b89c7510c907", 00:10:10.678 "is_configured": true, 00:10:10.678 "data_offset": 0, 00:10:10.678 "data_size": 65536 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "name": "BaseBdev2", 00:10:10.678 "uuid": "69683e0c-0c58-493a-bdb0-f23a1ca8cdcf", 00:10:10.678 "is_configured": true, 00:10:10.678 "data_offset": 0, 00:10:10.678 "data_size": 65536 00:10:10.678 }, 00:10:10.678 { 00:10:10.678 "name": "BaseBdev3", 00:10:10.678 "uuid": "3478b031-7f79-45c4-978b-34e3453a1480", 00:10:10.678 "is_configured": true, 00:10:10.678 "data_offset": 0, 00:10:10.678 "data_size": 65536 00:10:10.678 } 00:10:10.678 ] 00:10:10.678 } 00:10:10.678 } 00:10:10.679 }' 00:10:10.679 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.679 16:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.679 BaseBdev2 00:10:10.679 BaseBdev3' 00:10:10.679 16:18:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 [2024-10-08 16:18:04.190283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.936 [2024-10-08 16:18:04.190319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.936 [2024-10-08 16:18:04.190427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.936 [2024-10-08 16:18:04.190498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.936 [2024-10-08 16:18:04.190517] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64095 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 64095 ']' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64095 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64095 00:10:10.936 killing process with pid 64095 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64095' 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64095 00:10:10.936 [2024-10-08 16:18:04.229140] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.936 16:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64095 00:10:11.194 [2024-10-08 16:18:04.499588] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.602 00:10:12.602 real 0m11.957s 00:10:12.602 user 0m19.735s 00:10:12.602 sys 0m1.606s 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:10:12.602 ************************************ 00:10:12.602 END TEST raid_state_function_test 00:10:12.602 ************************************ 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.602 16:18:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:12.602 16:18:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.602 16:18:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.602 16:18:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.602 ************************************ 00:10:12.602 START TEST raid_state_function_test_sb 00:10:12.602 ************************************ 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.602 Process raid pid: 64727 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64727 
00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64727' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64727 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64727 ']' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.602 16:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.602 [2024-10-08 16:18:05.890077] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:10:12.602 [2024-10-08 16:18:05.890616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.859 [2024-10-08 16:18:06.068081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.118 [2024-10-08 16:18:06.347117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.375 [2024-10-08 16:18:06.553777] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.375 [2024-10-08 16:18:06.553839] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.633 [2024-10-08 16:18:06.908667] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.633 [2024-10-08 16:18:06.908902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.633 [2024-10-08 16:18:06.908933] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.633 [2024-10-08 16:18:06.908955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.633 [2024-10-08 16:18:06.908966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:13.633 [2024-10-08 16:18:06.908980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.633 16:18:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.917 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.917 "name": "Existed_Raid", 00:10:13.917 "uuid": "a3221329-2ec2-4953-a285-19451fa9ce17", 00:10:13.917 "strip_size_kb": 64, 00:10:13.917 "state": "configuring", 00:10:13.917 "raid_level": "raid0", 00:10:13.917 "superblock": true, 00:10:13.917 "num_base_bdevs": 3, 00:10:13.917 "num_base_bdevs_discovered": 0, 00:10:13.917 "num_base_bdevs_operational": 3, 00:10:13.917 "base_bdevs_list": [ 00:10:13.917 { 00:10:13.917 "name": "BaseBdev1", 00:10:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.917 "is_configured": false, 00:10:13.917 "data_offset": 0, 00:10:13.917 "data_size": 0 00:10:13.917 }, 00:10:13.917 { 00:10:13.917 "name": "BaseBdev2", 00:10:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.917 "is_configured": false, 00:10:13.917 "data_offset": 0, 00:10:13.917 "data_size": 0 00:10:13.917 }, 00:10:13.917 { 00:10:13.917 "name": "BaseBdev3", 00:10:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.917 "is_configured": false, 00:10:13.917 "data_offset": 0, 00:10:13.917 "data_size": 0 00:10:13.917 } 00:10:13.917 ] 00:10:13.917 }' 00:10:13.917 16:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.917 16:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.175 [2024-10-08 16:18:07.404669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.175 [2024-10-08 16:18:07.404721] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.175 [2024-10-08 16:18:07.416676] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.175 [2024-10-08 16:18:07.416739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.175 [2024-10-08 16:18:07.416757] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.175 [2024-10-08 16:18:07.416774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.175 [2024-10-08 16:18:07.416783] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.175 [2024-10-08 16:18:07.416809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.175 [2024-10-08 16:18:07.478175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.175 BaseBdev1 
00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.175 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.433 [ 00:10:14.433 { 00:10:14.433 "name": "BaseBdev1", 00:10:14.433 "aliases": [ 00:10:14.433 "bd0793e2-cfc8-4e41-87e7-df85690f7238" 00:10:14.433 ], 00:10:14.433 "product_name": "Malloc disk", 00:10:14.433 "block_size": 512, 00:10:14.433 "num_blocks": 65536, 00:10:14.433 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:14.433 "assigned_rate_limits": { 00:10:14.433 
"rw_ios_per_sec": 0, 00:10:14.433 "rw_mbytes_per_sec": 0, 00:10:14.433 "r_mbytes_per_sec": 0, 00:10:14.433 "w_mbytes_per_sec": 0 00:10:14.433 }, 00:10:14.433 "claimed": true, 00:10:14.433 "claim_type": "exclusive_write", 00:10:14.433 "zoned": false, 00:10:14.433 "supported_io_types": { 00:10:14.433 "read": true, 00:10:14.433 "write": true, 00:10:14.433 "unmap": true, 00:10:14.433 "flush": true, 00:10:14.433 "reset": true, 00:10:14.433 "nvme_admin": false, 00:10:14.433 "nvme_io": false, 00:10:14.433 "nvme_io_md": false, 00:10:14.433 "write_zeroes": true, 00:10:14.433 "zcopy": true, 00:10:14.433 "get_zone_info": false, 00:10:14.433 "zone_management": false, 00:10:14.433 "zone_append": false, 00:10:14.433 "compare": false, 00:10:14.433 "compare_and_write": false, 00:10:14.433 "abort": true, 00:10:14.433 "seek_hole": false, 00:10:14.433 "seek_data": false, 00:10:14.433 "copy": true, 00:10:14.433 "nvme_iov_md": false 00:10:14.433 }, 00:10:14.433 "memory_domains": [ 00:10:14.433 { 00:10:14.433 "dma_device_id": "system", 00:10:14.433 "dma_device_type": 1 00:10:14.433 }, 00:10:14.433 { 00:10:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.433 "dma_device_type": 2 00:10:14.433 } 00:10:14.433 ], 00:10:14.433 "driver_specific": {} 00:10:14.433 } 00:10:14.433 ] 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.433 "name": "Existed_Raid", 00:10:14.433 "uuid": "5ae8f61a-9d63-4ccb-b508-ca2f3ca363e7", 00:10:14.433 "strip_size_kb": 64, 00:10:14.433 "state": "configuring", 00:10:14.433 "raid_level": "raid0", 00:10:14.433 "superblock": true, 00:10:14.433 "num_base_bdevs": 3, 00:10:14.433 "num_base_bdevs_discovered": 1, 00:10:14.433 "num_base_bdevs_operational": 3, 00:10:14.433 "base_bdevs_list": [ 00:10:14.433 { 00:10:14.433 "name": "BaseBdev1", 00:10:14.433 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:14.433 "is_configured": true, 00:10:14.433 "data_offset": 2048, 00:10:14.433 "data_size": 63488 
00:10:14.433 }, 00:10:14.433 { 00:10:14.433 "name": "BaseBdev2", 00:10:14.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.433 "is_configured": false, 00:10:14.433 "data_offset": 0, 00:10:14.433 "data_size": 0 00:10:14.433 }, 00:10:14.433 { 00:10:14.433 "name": "BaseBdev3", 00:10:14.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.433 "is_configured": false, 00:10:14.433 "data_offset": 0, 00:10:14.433 "data_size": 0 00:10:14.433 } 00:10:14.433 ] 00:10:14.433 }' 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.433 16:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.691 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.691 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.691 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.691 [2024-10-08 16:18:08.014384] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.950 [2024-10-08 16:18:08.014608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 [2024-10-08 16:18:08.026398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.950 [2024-10-08 
16:18:08.029036] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.950 [2024-10-08 16:18:08.029262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.950 [2024-10-08 16:18:08.029385] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.950 [2024-10-08 16:18:08.029567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.950 "name": "Existed_Raid", 00:10:14.950 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:14.950 "strip_size_kb": 64, 00:10:14.950 "state": "configuring", 00:10:14.950 "raid_level": "raid0", 00:10:14.950 "superblock": true, 00:10:14.950 "num_base_bdevs": 3, 00:10:14.950 "num_base_bdevs_discovered": 1, 00:10:14.950 "num_base_bdevs_operational": 3, 00:10:14.950 "base_bdevs_list": [ 00:10:14.950 { 00:10:14.950 "name": "BaseBdev1", 00:10:14.950 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:14.950 "is_configured": true, 00:10:14.950 "data_offset": 2048, 00:10:14.950 "data_size": 63488 00:10:14.950 }, 00:10:14.950 { 00:10:14.950 "name": "BaseBdev2", 00:10:14.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.950 "is_configured": false, 00:10:14.950 "data_offset": 0, 00:10:14.950 "data_size": 0 00:10:14.950 }, 00:10:14.950 { 00:10:14.950 "name": "BaseBdev3", 00:10:14.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.950 "is_configured": false, 00:10:14.950 "data_offset": 0, 00:10:14.950 "data_size": 0 00:10:14.950 } 00:10:14.950 ] 00:10:14.950 }' 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.950 16:18:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.525 [2024-10-08 16:18:08.601639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.525 BaseBdev2 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.525 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.525 [ 00:10:15.525 { 00:10:15.525 "name": "BaseBdev2", 00:10:15.525 "aliases": [ 00:10:15.525 "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc" 00:10:15.525 ], 00:10:15.525 "product_name": "Malloc disk", 00:10:15.525 "block_size": 512, 00:10:15.525 "num_blocks": 65536, 00:10:15.525 "uuid": "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc", 00:10:15.525 "assigned_rate_limits": { 00:10:15.525 "rw_ios_per_sec": 0, 00:10:15.525 "rw_mbytes_per_sec": 0, 00:10:15.525 "r_mbytes_per_sec": 0, 00:10:15.525 "w_mbytes_per_sec": 0 00:10:15.525 }, 00:10:15.525 "claimed": true, 00:10:15.525 "claim_type": "exclusive_write", 00:10:15.525 "zoned": false, 00:10:15.525 "supported_io_types": { 00:10:15.525 "read": true, 00:10:15.525 "write": true, 00:10:15.525 "unmap": true, 00:10:15.525 "flush": true, 00:10:15.525 "reset": true, 00:10:15.525 "nvme_admin": false, 00:10:15.525 "nvme_io": false, 00:10:15.525 "nvme_io_md": false, 00:10:15.525 "write_zeroes": true, 00:10:15.525 "zcopy": true, 00:10:15.525 "get_zone_info": false, 00:10:15.525 "zone_management": false, 00:10:15.525 "zone_append": false, 00:10:15.525 "compare": false, 00:10:15.525 "compare_and_write": false, 00:10:15.525 "abort": true, 00:10:15.525 "seek_hole": false, 00:10:15.525 "seek_data": false, 00:10:15.525 "copy": true, 00:10:15.525 "nvme_iov_md": false 00:10:15.525 }, 00:10:15.525 "memory_domains": [ 00:10:15.525 { 00:10:15.526 "dma_device_id": "system", 00:10:15.526 "dma_device_type": 1 00:10:15.526 }, 00:10:15.526 { 00:10:15.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.526 "dma_device_type": 2 00:10:15.526 } 00:10:15.526 ], 00:10:15.526 "driver_specific": {} 00:10:15.526 } 00:10:15.526 ] 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.526 "name": "Existed_Raid", 00:10:15.526 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:15.526 "strip_size_kb": 64, 00:10:15.526 "state": "configuring", 00:10:15.526 "raid_level": "raid0", 00:10:15.526 "superblock": true, 00:10:15.526 "num_base_bdevs": 3, 00:10:15.526 "num_base_bdevs_discovered": 2, 00:10:15.526 "num_base_bdevs_operational": 3, 00:10:15.526 "base_bdevs_list": [ 00:10:15.526 { 00:10:15.526 "name": "BaseBdev1", 00:10:15.526 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:15.526 "is_configured": true, 00:10:15.526 "data_offset": 2048, 00:10:15.526 "data_size": 63488 00:10:15.526 }, 00:10:15.526 { 00:10:15.526 "name": "BaseBdev2", 00:10:15.526 "uuid": "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc", 00:10:15.526 "is_configured": true, 00:10:15.526 "data_offset": 2048, 00:10:15.526 "data_size": 63488 00:10:15.526 }, 00:10:15.526 { 00:10:15.526 "name": "BaseBdev3", 00:10:15.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.526 "is_configured": false, 00:10:15.526 "data_offset": 0, 00:10:15.526 "data_size": 0 00:10:15.526 } 00:10:15.526 ] 00:10:15.526 }' 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.526 16:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 [2024-10-08 16:18:09.216892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.093 [2024-10-08 16:18:09.217197] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.093 [2024-10-08 16:18:09.217229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.093 [2024-10-08 16:18:09.217578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.093 BaseBdev3 00:10:16.093 [2024-10-08 16:18:09.217769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.093 [2024-10-08 16:18:09.217801] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.093 [2024-10-08 16:18:09.217979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 [ 00:10:16.093 { 00:10:16.093 "name": "BaseBdev3", 00:10:16.093 "aliases": [ 00:10:16.093 "3207bdd7-9d4e-4689-8ee8-cf52c6fa6fde" 00:10:16.093 ], 00:10:16.093 "product_name": "Malloc disk", 00:10:16.093 "block_size": 512, 00:10:16.093 "num_blocks": 65536, 00:10:16.093 "uuid": "3207bdd7-9d4e-4689-8ee8-cf52c6fa6fde", 00:10:16.093 "assigned_rate_limits": { 00:10:16.093 "rw_ios_per_sec": 0, 00:10:16.093 "rw_mbytes_per_sec": 0, 00:10:16.093 "r_mbytes_per_sec": 0, 00:10:16.093 "w_mbytes_per_sec": 0 00:10:16.093 }, 00:10:16.093 "claimed": true, 00:10:16.093 "claim_type": "exclusive_write", 00:10:16.093 "zoned": false, 00:10:16.093 "supported_io_types": { 00:10:16.093 "read": true, 00:10:16.093 "write": true, 00:10:16.093 "unmap": true, 00:10:16.093 "flush": true, 00:10:16.093 "reset": true, 00:10:16.093 "nvme_admin": false, 00:10:16.093 "nvme_io": false, 00:10:16.093 "nvme_io_md": false, 00:10:16.093 "write_zeroes": true, 00:10:16.093 "zcopy": true, 00:10:16.093 "get_zone_info": false, 00:10:16.093 "zone_management": false, 00:10:16.093 "zone_append": false, 00:10:16.093 "compare": false, 00:10:16.093 "compare_and_write": false, 00:10:16.093 "abort": true, 00:10:16.093 "seek_hole": false, 00:10:16.093 "seek_data": false, 00:10:16.093 "copy": true, 00:10:16.093 "nvme_iov_md": false 00:10:16.093 }, 00:10:16.093 "memory_domains": [ 00:10:16.093 { 00:10:16.093 "dma_device_id": "system", 00:10:16.093 "dma_device_type": 1 00:10:16.093 }, 00:10:16.093 { 00:10:16.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.093 "dma_device_type": 2 00:10:16.093 } 00:10:16.093 ], 00:10:16.093 "driver_specific": 
{} 00:10:16.093 } 00:10:16.093 ] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.093 "name": "Existed_Raid", 00:10:16.093 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:16.093 "strip_size_kb": 64, 00:10:16.093 "state": "online", 00:10:16.093 "raid_level": "raid0", 00:10:16.093 "superblock": true, 00:10:16.093 "num_base_bdevs": 3, 00:10:16.093 "num_base_bdevs_discovered": 3, 00:10:16.093 "num_base_bdevs_operational": 3, 00:10:16.093 "base_bdevs_list": [ 00:10:16.093 { 00:10:16.093 "name": "BaseBdev1", 00:10:16.093 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:16.093 "is_configured": true, 00:10:16.093 "data_offset": 2048, 00:10:16.093 "data_size": 63488 00:10:16.093 }, 00:10:16.093 { 00:10:16.093 "name": "BaseBdev2", 00:10:16.093 "uuid": "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc", 00:10:16.093 "is_configured": true, 00:10:16.093 "data_offset": 2048, 00:10:16.093 "data_size": 63488 00:10:16.093 }, 00:10:16.093 { 00:10:16.093 "name": "BaseBdev3", 00:10:16.093 "uuid": "3207bdd7-9d4e-4689-8ee8-cf52c6fa6fde", 00:10:16.093 "is_configured": true, 00:10:16.093 "data_offset": 2048, 00:10:16.093 "data_size": 63488 00:10:16.093 } 00:10:16.093 ] 00:10:16.093 }' 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.093 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.661 [2024-10-08 16:18:09.789534] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.661 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.661 "name": "Existed_Raid", 00:10:16.661 "aliases": [ 00:10:16.661 "e96579ed-f5db-4461-9678-08c7a91ddb4e" 00:10:16.661 ], 00:10:16.661 "product_name": "Raid Volume", 00:10:16.661 "block_size": 512, 00:10:16.661 "num_blocks": 190464, 00:10:16.661 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:16.661 "assigned_rate_limits": { 00:10:16.661 "rw_ios_per_sec": 0, 00:10:16.661 "rw_mbytes_per_sec": 0, 00:10:16.661 "r_mbytes_per_sec": 0, 00:10:16.661 "w_mbytes_per_sec": 0 00:10:16.661 }, 00:10:16.661 "claimed": false, 00:10:16.661 "zoned": false, 00:10:16.661 "supported_io_types": { 00:10:16.661 "read": true, 00:10:16.661 "write": true, 00:10:16.661 "unmap": true, 00:10:16.661 "flush": true, 00:10:16.661 "reset": true, 00:10:16.661 "nvme_admin": false, 00:10:16.661 "nvme_io": false, 00:10:16.661 "nvme_io_md": false, 00:10:16.661 
"write_zeroes": true, 00:10:16.661 "zcopy": false, 00:10:16.661 "get_zone_info": false, 00:10:16.661 "zone_management": false, 00:10:16.661 "zone_append": false, 00:10:16.661 "compare": false, 00:10:16.661 "compare_and_write": false, 00:10:16.661 "abort": false, 00:10:16.661 "seek_hole": false, 00:10:16.661 "seek_data": false, 00:10:16.661 "copy": false, 00:10:16.661 "nvme_iov_md": false 00:10:16.661 }, 00:10:16.661 "memory_domains": [ 00:10:16.661 { 00:10:16.661 "dma_device_id": "system", 00:10:16.661 "dma_device_type": 1 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.661 "dma_device_type": 2 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "system", 00:10:16.661 "dma_device_type": 1 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.661 "dma_device_type": 2 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "system", 00:10:16.661 "dma_device_type": 1 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.661 "dma_device_type": 2 00:10:16.661 } 00:10:16.661 ], 00:10:16.661 "driver_specific": { 00:10:16.661 "raid": { 00:10:16.662 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:16.662 "strip_size_kb": 64, 00:10:16.662 "state": "online", 00:10:16.662 "raid_level": "raid0", 00:10:16.662 "superblock": true, 00:10:16.662 "num_base_bdevs": 3, 00:10:16.662 "num_base_bdevs_discovered": 3, 00:10:16.662 "num_base_bdevs_operational": 3, 00:10:16.662 "base_bdevs_list": [ 00:10:16.662 { 00:10:16.662 "name": "BaseBdev1", 00:10:16.662 "uuid": "bd0793e2-cfc8-4e41-87e7-df85690f7238", 00:10:16.662 "is_configured": true, 00:10:16.662 "data_offset": 2048, 00:10:16.662 "data_size": 63488 00:10:16.662 }, 00:10:16.662 { 00:10:16.662 "name": "BaseBdev2", 00:10:16.662 "uuid": "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc", 00:10:16.662 "is_configured": true, 00:10:16.662 "data_offset": 2048, 00:10:16.662 "data_size": 63488 00:10:16.662 }, 
00:10:16.662 { 00:10:16.662 "name": "BaseBdev3", 00:10:16.662 "uuid": "3207bdd7-9d4e-4689-8ee8-cf52c6fa6fde", 00:10:16.662 "is_configured": true, 00:10:16.662 "data_offset": 2048, 00:10:16.662 "data_size": 63488 00:10:16.662 } 00:10:16.662 ] 00:10:16.662 } 00:10:16.662 } 00:10:16.662 }' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.662 BaseBdev2 00:10:16.662 BaseBdev3' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.662 16:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.920 
16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.920 [2024-10-08 16:18:10.125261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.920 [2024-10-08 16:18:10.125310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.920 [2024-10-08 16:18:10.125389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.920 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.178 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.178 "name": "Existed_Raid", 00:10:17.178 "uuid": "e96579ed-f5db-4461-9678-08c7a91ddb4e", 00:10:17.178 "strip_size_kb": 64, 00:10:17.178 "state": "offline", 00:10:17.178 "raid_level": "raid0", 00:10:17.178 "superblock": true, 00:10:17.178 "num_base_bdevs": 3, 00:10:17.178 "num_base_bdevs_discovered": 2, 00:10:17.178 "num_base_bdevs_operational": 2, 00:10:17.178 "base_bdevs_list": [ 00:10:17.178 { 00:10:17.178 "name": null, 00:10:17.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.179 "is_configured": false, 00:10:17.179 "data_offset": 0, 00:10:17.179 "data_size": 63488 00:10:17.179 }, 00:10:17.179 { 00:10:17.179 "name": "BaseBdev2", 00:10:17.179 "uuid": "8e6ac3ab-f411-4d3d-984b-b9a6ffbc1dcc", 00:10:17.179 "is_configured": true, 00:10:17.179 "data_offset": 2048, 00:10:17.179 "data_size": 63488 00:10:17.179 }, 00:10:17.179 { 00:10:17.179 "name": "BaseBdev3", 00:10:17.179 "uuid": "3207bdd7-9d4e-4689-8ee8-cf52c6fa6fde", 
00:10:17.179 "is_configured": true, 00:10:17.179 "data_offset": 2048, 00:10:17.179 "data_size": 63488 00:10:17.179 } 00:10:17.179 ] 00:10:17.179 }' 00:10:17.179 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.179 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.436 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.695 [2024-10-08 16:18:10.759637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.695 [2024-10-08 16:18:10.904327] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.695 [2024-10-08 16:18:10.904618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.695 16:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.695 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 BaseBdev2 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.955 16:18:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 [ 00:10:17.955 { 00:10:17.955 "name": "BaseBdev2", 00:10:17.955 "aliases": [ 00:10:17.955 "f150c3ab-b557-4a69-8ed0-8973b1470773" 00:10:17.955 ], 00:10:17.955 "product_name": "Malloc disk", 00:10:17.955 "block_size": 512, 00:10:17.955 "num_blocks": 65536, 00:10:17.955 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:17.955 "assigned_rate_limits": { 00:10:17.955 "rw_ios_per_sec": 0, 00:10:17.955 "rw_mbytes_per_sec": 0, 00:10:17.955 "r_mbytes_per_sec": 0, 00:10:17.955 "w_mbytes_per_sec": 0 00:10:17.955 }, 00:10:17.955 "claimed": false, 00:10:17.955 "zoned": false, 00:10:17.955 "supported_io_types": { 00:10:17.955 "read": true, 00:10:17.955 "write": true, 00:10:17.955 "unmap": true, 00:10:17.955 "flush": true, 00:10:17.955 "reset": true, 00:10:17.955 "nvme_admin": false, 00:10:17.955 "nvme_io": false, 00:10:17.955 "nvme_io_md": false, 00:10:17.955 "write_zeroes": true, 00:10:17.955 "zcopy": true, 00:10:17.955 "get_zone_info": false, 00:10:17.955 
"zone_management": false, 00:10:17.955 "zone_append": false, 00:10:17.955 "compare": false, 00:10:17.955 "compare_and_write": false, 00:10:17.955 "abort": true, 00:10:17.955 "seek_hole": false, 00:10:17.955 "seek_data": false, 00:10:17.955 "copy": true, 00:10:17.955 "nvme_iov_md": false 00:10:17.955 }, 00:10:17.955 "memory_domains": [ 00:10:17.955 { 00:10:17.955 "dma_device_id": "system", 00:10:17.955 "dma_device_type": 1 00:10:17.955 }, 00:10:17.955 { 00:10:17.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.955 "dma_device_type": 2 00:10:17.955 } 00:10:17.955 ], 00:10:17.955 "driver_specific": {} 00:10:17.955 } 00:10:17.955 ] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 BaseBdev3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 [ 00:10:17.955 { 00:10:17.955 "name": "BaseBdev3", 00:10:17.955 "aliases": [ 00:10:17.955 "704ac316-fbcb-4411-9486-6d567d4790d8" 00:10:17.955 ], 00:10:17.955 "product_name": "Malloc disk", 00:10:17.955 "block_size": 512, 00:10:17.955 "num_blocks": 65536, 00:10:17.955 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:17.955 "assigned_rate_limits": { 00:10:17.955 "rw_ios_per_sec": 0, 00:10:17.955 "rw_mbytes_per_sec": 0, 00:10:17.955 "r_mbytes_per_sec": 0, 00:10:17.955 "w_mbytes_per_sec": 0 00:10:17.955 }, 00:10:17.955 "claimed": false, 00:10:17.955 "zoned": false, 00:10:17.955 "supported_io_types": { 00:10:17.955 "read": true, 00:10:17.955 "write": true, 00:10:17.955 "unmap": true, 00:10:17.955 "flush": true, 00:10:17.955 "reset": true, 00:10:17.955 "nvme_admin": false, 00:10:17.955 "nvme_io": false, 00:10:17.955 "nvme_io_md": false, 00:10:17.955 "write_zeroes": true, 00:10:17.955 
"zcopy": true, 00:10:17.955 "get_zone_info": false, 00:10:17.955 "zone_management": false, 00:10:17.955 "zone_append": false, 00:10:17.955 "compare": false, 00:10:17.955 "compare_and_write": false, 00:10:17.955 "abort": true, 00:10:17.955 "seek_hole": false, 00:10:17.955 "seek_data": false, 00:10:17.955 "copy": true, 00:10:17.955 "nvme_iov_md": false 00:10:17.955 }, 00:10:17.955 "memory_domains": [ 00:10:17.955 { 00:10:17.955 "dma_device_id": "system", 00:10:17.955 "dma_device_type": 1 00:10:17.955 }, 00:10:17.955 { 00:10:17.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.955 "dma_device_type": 2 00:10:17.955 } 00:10:17.955 ], 00:10:17.955 "driver_specific": {} 00:10:17.955 } 00:10:17.955 ] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 [2024-10-08 16:18:11.200108] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.955 [2024-10-08 16:18:11.200424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.955 [2024-10-08 16:18:11.200637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.955 [2024-10-08 16:18:11.203235] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.955 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.956 16:18:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.956 "name": "Existed_Raid", 00:10:17.956 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:17.956 "strip_size_kb": 64, 00:10:17.956 "state": "configuring", 00:10:17.956 "raid_level": "raid0", 00:10:17.956 "superblock": true, 00:10:17.956 "num_base_bdevs": 3, 00:10:17.956 "num_base_bdevs_discovered": 2, 00:10:17.956 "num_base_bdevs_operational": 3, 00:10:17.956 "base_bdevs_list": [ 00:10:17.956 { 00:10:17.956 "name": "BaseBdev1", 00:10:17.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.956 "is_configured": false, 00:10:17.956 "data_offset": 0, 00:10:17.956 "data_size": 0 00:10:17.956 }, 00:10:17.956 { 00:10:17.956 "name": "BaseBdev2", 00:10:17.956 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:17.956 "is_configured": true, 00:10:17.956 "data_offset": 2048, 00:10:17.956 "data_size": 63488 00:10:17.956 }, 00:10:17.956 { 00:10:17.956 "name": "BaseBdev3", 00:10:17.956 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:17.956 "is_configured": true, 00:10:17.956 "data_offset": 2048, 00:10:17.956 "data_size": 63488 00:10:17.956 } 00:10:17.956 ] 00:10:17.956 }' 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.956 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 [2024-10-08 16:18:11.736301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.523 16:18:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.523 "name": "Existed_Raid", 00:10:18.523 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:18.523 "strip_size_kb": 64, 
00:10:18.523 "state": "configuring", 00:10:18.523 "raid_level": "raid0", 00:10:18.523 "superblock": true, 00:10:18.523 "num_base_bdevs": 3, 00:10:18.523 "num_base_bdevs_discovered": 1, 00:10:18.523 "num_base_bdevs_operational": 3, 00:10:18.523 "base_bdevs_list": [ 00:10:18.523 { 00:10:18.523 "name": "BaseBdev1", 00:10:18.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.523 "is_configured": false, 00:10:18.523 "data_offset": 0, 00:10:18.523 "data_size": 0 00:10:18.523 }, 00:10:18.523 { 00:10:18.523 "name": null, 00:10:18.523 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:18.523 "is_configured": false, 00:10:18.523 "data_offset": 0, 00:10:18.523 "data_size": 63488 00:10:18.523 }, 00:10:18.523 { 00:10:18.523 "name": "BaseBdev3", 00:10:18.523 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:18.523 "is_configured": true, 00:10:18.523 "data_offset": 2048, 00:10:18.523 "data_size": 63488 00:10:18.523 } 00:10:18.523 ] 00:10:18.523 }' 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.523 16:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 [2024-10-08 16:18:12.354635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.090 BaseBdev1 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 
[ 00:10:19.090 { 00:10:19.090 "name": "BaseBdev1", 00:10:19.090 "aliases": [ 00:10:19.090 "1d1521df-282e-414b-974d-d42841bb83cb" 00:10:19.090 ], 00:10:19.090 "product_name": "Malloc disk", 00:10:19.090 "block_size": 512, 00:10:19.090 "num_blocks": 65536, 00:10:19.090 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:19.090 "assigned_rate_limits": { 00:10:19.090 "rw_ios_per_sec": 0, 00:10:19.090 "rw_mbytes_per_sec": 0, 00:10:19.090 "r_mbytes_per_sec": 0, 00:10:19.090 "w_mbytes_per_sec": 0 00:10:19.090 }, 00:10:19.090 "claimed": true, 00:10:19.090 "claim_type": "exclusive_write", 00:10:19.090 "zoned": false, 00:10:19.090 "supported_io_types": { 00:10:19.090 "read": true, 00:10:19.090 "write": true, 00:10:19.090 "unmap": true, 00:10:19.090 "flush": true, 00:10:19.090 "reset": true, 00:10:19.090 "nvme_admin": false, 00:10:19.090 "nvme_io": false, 00:10:19.090 "nvme_io_md": false, 00:10:19.090 "write_zeroes": true, 00:10:19.090 "zcopy": true, 00:10:19.090 "get_zone_info": false, 00:10:19.090 "zone_management": false, 00:10:19.090 "zone_append": false, 00:10:19.090 "compare": false, 00:10:19.090 "compare_and_write": false, 00:10:19.090 "abort": true, 00:10:19.090 "seek_hole": false, 00:10:19.090 "seek_data": false, 00:10:19.090 "copy": true, 00:10:19.090 "nvme_iov_md": false 00:10:19.090 }, 00:10:19.090 "memory_domains": [ 00:10:19.090 { 00:10:19.090 "dma_device_id": "system", 00:10:19.090 "dma_device_type": 1 00:10:19.090 }, 00:10:19.090 { 00:10:19.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.090 "dma_device_type": 2 00:10:19.090 } 00:10:19.090 ], 00:10:19.090 "driver_specific": {} 00:10:19.090 } 00:10:19.090 ] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.090 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.349 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.349 "name": "Existed_Raid", 00:10:19.349 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:19.349 "strip_size_kb": 64, 00:10:19.349 "state": "configuring", 00:10:19.349 "raid_level": "raid0", 00:10:19.349 "superblock": true, 
00:10:19.349 "num_base_bdevs": 3, 00:10:19.349 "num_base_bdevs_discovered": 2, 00:10:19.349 "num_base_bdevs_operational": 3, 00:10:19.349 "base_bdevs_list": [ 00:10:19.349 { 00:10:19.349 "name": "BaseBdev1", 00:10:19.349 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:19.349 "is_configured": true, 00:10:19.349 "data_offset": 2048, 00:10:19.349 "data_size": 63488 00:10:19.349 }, 00:10:19.349 { 00:10:19.349 "name": null, 00:10:19.349 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:19.349 "is_configured": false, 00:10:19.349 "data_offset": 0, 00:10:19.349 "data_size": 63488 00:10:19.349 }, 00:10:19.349 { 00:10:19.349 "name": "BaseBdev3", 00:10:19.349 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:19.349 "is_configured": true, 00:10:19.349 "data_offset": 2048, 00:10:19.349 "data_size": 63488 00:10:19.349 } 00:10:19.349 ] 00:10:19.349 }' 00:10:19.349 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.349 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.607 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.607 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.607 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 [2024-10-08 16:18:12.974947] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:19.866 16:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.866 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.866 "name": "Existed_Raid", 00:10:19.866 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:19.866 "strip_size_kb": 64, 00:10:19.866 "state": "configuring", 00:10:19.866 "raid_level": "raid0", 00:10:19.866 "superblock": true, 00:10:19.866 "num_base_bdevs": 3, 00:10:19.866 "num_base_bdevs_discovered": 1, 00:10:19.866 "num_base_bdevs_operational": 3, 00:10:19.866 "base_bdevs_list": [ 00:10:19.866 { 00:10:19.866 "name": "BaseBdev1", 00:10:19.866 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:19.866 "is_configured": true, 00:10:19.866 "data_offset": 2048, 00:10:19.866 "data_size": 63488 00:10:19.866 }, 00:10:19.867 { 00:10:19.867 "name": null, 00:10:19.867 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:19.867 "is_configured": false, 00:10:19.867 "data_offset": 0, 00:10:19.867 "data_size": 63488 00:10:19.867 }, 00:10:19.867 { 00:10:19.867 "name": null, 00:10:19.867 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:19.867 "is_configured": false, 00:10:19.867 "data_offset": 0, 00:10:19.867 "data_size": 63488 00:10:19.867 } 00:10:19.867 ] 00:10:19.867 }' 00:10:19.867 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.867 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.433 [2024-10-08 16:18:13.563094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.433 "name": "Existed_Raid", 00:10:20.433 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:20.433 "strip_size_kb": 64, 00:10:20.433 "state": "configuring", 00:10:20.433 "raid_level": "raid0", 00:10:20.433 "superblock": true, 00:10:20.433 "num_base_bdevs": 3, 00:10:20.433 "num_base_bdevs_discovered": 2, 00:10:20.433 "num_base_bdevs_operational": 3, 00:10:20.433 "base_bdevs_list": [ 00:10:20.433 { 00:10:20.433 "name": "BaseBdev1", 00:10:20.433 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:20.433 "is_configured": true, 00:10:20.433 "data_offset": 2048, 00:10:20.433 "data_size": 63488 00:10:20.433 }, 00:10:20.433 { 00:10:20.433 "name": null, 00:10:20.433 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:20.433 "is_configured": false, 00:10:20.433 "data_offset": 0, 00:10:20.433 "data_size": 63488 00:10:20.433 }, 00:10:20.433 { 00:10:20.433 "name": "BaseBdev3", 00:10:20.433 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:20.433 "is_configured": true, 00:10:20.433 "data_offset": 2048, 00:10:20.433 "data_size": 63488 00:10:20.433 } 00:10:20.433 ] 00:10:20.433 }' 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.433 16:18:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.005 [2024-10-08 16:18:14.151387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.005 "name": "Existed_Raid", 00:10:21.005 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:21.005 "strip_size_kb": 64, 00:10:21.005 "state": "configuring", 00:10:21.005 "raid_level": "raid0", 00:10:21.005 "superblock": true, 00:10:21.005 "num_base_bdevs": 3, 00:10:21.005 "num_base_bdevs_discovered": 1, 00:10:21.005 "num_base_bdevs_operational": 3, 00:10:21.005 "base_bdevs_list": [ 00:10:21.005 { 00:10:21.005 "name": null, 00:10:21.005 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:21.005 "is_configured": false, 00:10:21.005 "data_offset": 0, 00:10:21.005 "data_size": 63488 00:10:21.005 }, 00:10:21.005 { 00:10:21.005 "name": null, 00:10:21.005 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:21.005 "is_configured": false, 00:10:21.005 "data_offset": 0, 00:10:21.005 
"data_size": 63488 00:10:21.005 }, 00:10:21.005 { 00:10:21.005 "name": "BaseBdev3", 00:10:21.005 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:21.005 "is_configured": true, 00:10:21.005 "data_offset": 2048, 00:10:21.005 "data_size": 63488 00:10:21.005 } 00:10:21.005 ] 00:10:21.005 }' 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.005 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.573 [2024-10-08 16:18:14.813021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.573 16:18:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.573 "name": "Existed_Raid", 00:10:21.573 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:21.573 "strip_size_kb": 64, 00:10:21.573 "state": "configuring", 00:10:21.573 "raid_level": "raid0", 00:10:21.573 "superblock": true, 00:10:21.573 "num_base_bdevs": 3, 00:10:21.573 
"num_base_bdevs_discovered": 2, 00:10:21.573 "num_base_bdevs_operational": 3, 00:10:21.573 "base_bdevs_list": [ 00:10:21.573 { 00:10:21.573 "name": null, 00:10:21.573 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:21.573 "is_configured": false, 00:10:21.573 "data_offset": 0, 00:10:21.573 "data_size": 63488 00:10:21.573 }, 00:10:21.573 { 00:10:21.573 "name": "BaseBdev2", 00:10:21.573 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:21.573 "is_configured": true, 00:10:21.573 "data_offset": 2048, 00:10:21.573 "data_size": 63488 00:10:21.573 }, 00:10:21.573 { 00:10:21.573 "name": "BaseBdev3", 00:10:21.573 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:21.573 "is_configured": true, 00:10:21.573 "data_offset": 2048, 00:10:21.573 "data_size": 63488 00:10:21.573 } 00:10:21.573 ] 00:10:21.573 }' 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.573 16:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.140 16:18:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1d1521df-282e-414b-974d-d42841bb83cb 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.140 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.398 [2024-10-08 16:18:15.488814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:22.398 [2024-10-08 16:18:15.489113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.398 [2024-10-08 16:18:15.489140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.399 [2024-10-08 16:18:15.489460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:22.399 NewBaseBdev 00:10:22.399 [2024-10-08 16:18:15.489669] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.399 [2024-10-08 16:18:15.489688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:22.399 [2024-10-08 16:18:15.489855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.399 [ 00:10:22.399 { 00:10:22.399 "name": "NewBaseBdev", 00:10:22.399 "aliases": [ 00:10:22.399 "1d1521df-282e-414b-974d-d42841bb83cb" 00:10:22.399 ], 00:10:22.399 "product_name": "Malloc disk", 00:10:22.399 "block_size": 512, 00:10:22.399 "num_blocks": 65536, 00:10:22.399 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:22.399 "assigned_rate_limits": { 00:10:22.399 "rw_ios_per_sec": 0, 00:10:22.399 "rw_mbytes_per_sec": 0, 00:10:22.399 "r_mbytes_per_sec": 0, 00:10:22.399 "w_mbytes_per_sec": 0 00:10:22.399 }, 00:10:22.399 "claimed": true, 00:10:22.399 "claim_type": "exclusive_write", 00:10:22.399 "zoned": false, 00:10:22.399 "supported_io_types": { 00:10:22.399 "read": true, 00:10:22.399 "write": true, 
00:10:22.399 "unmap": true, 00:10:22.399 "flush": true, 00:10:22.399 "reset": true, 00:10:22.399 "nvme_admin": false, 00:10:22.399 "nvme_io": false, 00:10:22.399 "nvme_io_md": false, 00:10:22.399 "write_zeroes": true, 00:10:22.399 "zcopy": true, 00:10:22.399 "get_zone_info": false, 00:10:22.399 "zone_management": false, 00:10:22.399 "zone_append": false, 00:10:22.399 "compare": false, 00:10:22.399 "compare_and_write": false, 00:10:22.399 "abort": true, 00:10:22.399 "seek_hole": false, 00:10:22.399 "seek_data": false, 00:10:22.399 "copy": true, 00:10:22.399 "nvme_iov_md": false 00:10:22.399 }, 00:10:22.399 "memory_domains": [ 00:10:22.399 { 00:10:22.399 "dma_device_id": "system", 00:10:22.399 "dma_device_type": 1 00:10:22.399 }, 00:10:22.399 { 00:10:22.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.399 "dma_device_type": 2 00:10:22.399 } 00:10:22.399 ], 00:10:22.399 "driver_specific": {} 00:10:22.399 } 00:10:22.399 ] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.399 "name": "Existed_Raid", 00:10:22.399 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:22.399 "strip_size_kb": 64, 00:10:22.399 "state": "online", 00:10:22.399 "raid_level": "raid0", 00:10:22.399 "superblock": true, 00:10:22.399 "num_base_bdevs": 3, 00:10:22.399 "num_base_bdevs_discovered": 3, 00:10:22.399 "num_base_bdevs_operational": 3, 00:10:22.399 "base_bdevs_list": [ 00:10:22.399 { 00:10:22.399 "name": "NewBaseBdev", 00:10:22.399 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:22.399 "is_configured": true, 00:10:22.399 "data_offset": 2048, 00:10:22.399 "data_size": 63488 00:10:22.399 }, 00:10:22.399 { 00:10:22.399 "name": "BaseBdev2", 00:10:22.399 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:22.399 "is_configured": true, 00:10:22.399 "data_offset": 2048, 00:10:22.399 "data_size": 63488 00:10:22.399 }, 00:10:22.399 { 00:10:22.399 "name": "BaseBdev3", 00:10:22.399 "uuid": 
"704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:22.399 "is_configured": true, 00:10:22.399 "data_offset": 2048, 00:10:22.399 "data_size": 63488 00:10:22.399 } 00:10:22.399 ] 00:10:22.399 }' 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.399 16:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.977 [2024-10-08 16:18:16.097461] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.977 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.978 "name": "Existed_Raid", 00:10:22.978 "aliases": [ 00:10:22.978 "8cd1d27a-2b4f-4de1-951d-da06162ba9f6" 
00:10:22.978 ], 00:10:22.978 "product_name": "Raid Volume", 00:10:22.978 "block_size": 512, 00:10:22.978 "num_blocks": 190464, 00:10:22.978 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:22.978 "assigned_rate_limits": { 00:10:22.978 "rw_ios_per_sec": 0, 00:10:22.978 "rw_mbytes_per_sec": 0, 00:10:22.978 "r_mbytes_per_sec": 0, 00:10:22.978 "w_mbytes_per_sec": 0 00:10:22.978 }, 00:10:22.978 "claimed": false, 00:10:22.978 "zoned": false, 00:10:22.978 "supported_io_types": { 00:10:22.978 "read": true, 00:10:22.978 "write": true, 00:10:22.978 "unmap": true, 00:10:22.978 "flush": true, 00:10:22.978 "reset": true, 00:10:22.978 "nvme_admin": false, 00:10:22.978 "nvme_io": false, 00:10:22.978 "nvme_io_md": false, 00:10:22.978 "write_zeroes": true, 00:10:22.978 "zcopy": false, 00:10:22.978 "get_zone_info": false, 00:10:22.978 "zone_management": false, 00:10:22.978 "zone_append": false, 00:10:22.978 "compare": false, 00:10:22.978 "compare_and_write": false, 00:10:22.978 "abort": false, 00:10:22.978 "seek_hole": false, 00:10:22.978 "seek_data": false, 00:10:22.978 "copy": false, 00:10:22.978 "nvme_iov_md": false 00:10:22.978 }, 00:10:22.978 "memory_domains": [ 00:10:22.978 { 00:10:22.978 "dma_device_id": "system", 00:10:22.978 "dma_device_type": 1 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.978 "dma_device_type": 2 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "dma_device_id": "system", 00:10:22.978 "dma_device_type": 1 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.978 "dma_device_type": 2 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "dma_device_id": "system", 00:10:22.978 "dma_device_type": 1 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.978 "dma_device_type": 2 00:10:22.978 } 00:10:22.978 ], 00:10:22.978 "driver_specific": { 00:10:22.978 "raid": { 00:10:22.978 "uuid": "8cd1d27a-2b4f-4de1-951d-da06162ba9f6", 00:10:22.978 
"strip_size_kb": 64, 00:10:22.978 "state": "online", 00:10:22.978 "raid_level": "raid0", 00:10:22.978 "superblock": true, 00:10:22.978 "num_base_bdevs": 3, 00:10:22.978 "num_base_bdevs_discovered": 3, 00:10:22.978 "num_base_bdevs_operational": 3, 00:10:22.978 "base_bdevs_list": [ 00:10:22.978 { 00:10:22.978 "name": "NewBaseBdev", 00:10:22.978 "uuid": "1d1521df-282e-414b-974d-d42841bb83cb", 00:10:22.978 "is_configured": true, 00:10:22.978 "data_offset": 2048, 00:10:22.978 "data_size": 63488 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "name": "BaseBdev2", 00:10:22.978 "uuid": "f150c3ab-b557-4a69-8ed0-8973b1470773", 00:10:22.978 "is_configured": true, 00:10:22.978 "data_offset": 2048, 00:10:22.978 "data_size": 63488 00:10:22.978 }, 00:10:22.978 { 00:10:22.978 "name": "BaseBdev3", 00:10:22.978 "uuid": "704ac316-fbcb-4411-9486-6d567d4790d8", 00:10:22.978 "is_configured": true, 00:10:22.978 "data_offset": 2048, 00:10:22.978 "data_size": 63488 00:10:22.978 } 00:10:22.978 ] 00:10:22.978 } 00:10:22.978 } 00:10:22.978 }' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.978 BaseBdev2 00:10:22.978 BaseBdev3' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.978 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.246 16:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.246 [2024-10-08 16:18:16.429134] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.246 [2024-10-08 16:18:16.429196] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.246 [2024-10-08 16:18:16.429311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.246 [2024-10-08 16:18:16.429390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.246 [2024-10-08 16:18:16.429413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64727 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64727 ']' 00:10:23.246 16:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64727 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64727 00:10:23.246 killing process with pid 64727 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64727' 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64727 00:10:23.246 [2024-10-08 16:18:16.467507] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.246 16:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64727 00:10:23.504 [2024-10-08 16:18:16.737790] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.878 16:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.878 00:10:24.878 real 0m12.180s 00:10:24.878 user 0m20.032s 00:10:24.878 sys 0m1.706s 00:10:24.878 16:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.878 ************************************ 00:10:24.878 END TEST raid_state_function_test_sb 00:10:24.878 ************************************ 00:10:24.878 16:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.878 16:18:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:24.878 16:18:18 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:24.878 16:18:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.878 16:18:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.878 ************************************ 00:10:24.878 START TEST raid_superblock_test 00:10:24.878 ************************************ 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:24.878 16:18:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:24.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65369 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65369 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65369 ']' 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.878 16:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.878 [2024-10-08 16:18:18.130996] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:10:24.878 [2024-10-08 16:18:18.131236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65369 ] 00:10:25.138 [2024-10-08 16:18:18.307671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.399 [2024-10-08 16:18:18.599919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.658 [2024-10-08 16:18:18.806205] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.658 [2024-10-08 16:18:18.806259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.917 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:25.917 
16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 malloc1 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 [2024-10-08 16:18:19.096210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.918 [2024-10-08 16:18:19.096609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.918 [2024-10-08 16:18:19.096658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:25.918 [2024-10-08 16:18:19.096679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.918 [2024-10-08 16:18:19.099522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.918 [2024-10-08 16:18:19.099739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.918 pt1 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 malloc2 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 [2024-10-08 16:18:19.160419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.918 [2024-10-08 16:18:19.160531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.918 [2024-10-08 16:18:19.160571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:25.918 [2024-10-08 16:18:19.160589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.918 [2024-10-08 16:18:19.163389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.918 [2024-10-08 16:18:19.163710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.918 
pt2 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 malloc3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 [2024-10-08 16:18:19.208795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.918 [2024-10-08 16:18:19.208892] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.918 [2024-10-08 16:18:19.208926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:25.918 [2024-10-08 16:18:19.208942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.918 [2024-10-08 16:18:19.211660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.918 [2024-10-08 16:18:19.211727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.918 pt3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 [2024-10-08 16:18:19.216878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.918 [2024-10-08 16:18:19.219292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.918 [2024-10-08 16:18:19.219625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.918 [2024-10-08 16:18:19.219854] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:25.918 [2024-10-08 16:18:19.219879] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.918 [2024-10-08 16:18:19.220185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:25.918 [2024-10-08 16:18:19.220401] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:25.918 [2024-10-08 16:18:19.220419] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:25.918 [2024-10-08 16:18:19.220636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.918 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.177 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.177 "name": "raid_bdev1", 00:10:26.177 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc", 00:10:26.177 "strip_size_kb": 64, 00:10:26.177 "state": "online", 00:10:26.177 "raid_level": "raid0", 00:10:26.177 "superblock": true, 00:10:26.177 "num_base_bdevs": 3, 00:10:26.177 "num_base_bdevs_discovered": 3, 00:10:26.177 "num_base_bdevs_operational": 3, 00:10:26.177 "base_bdevs_list": [ 00:10:26.177 { 00:10:26.177 "name": "pt1", 00:10:26.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.177 "is_configured": true, 00:10:26.177 "data_offset": 2048, 00:10:26.177 "data_size": 63488 00:10:26.177 }, 00:10:26.177 { 00:10:26.177 "name": "pt2", 00:10:26.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.177 "is_configured": true, 00:10:26.177 "data_offset": 2048, 00:10:26.177 "data_size": 63488 00:10:26.177 }, 00:10:26.177 { 00:10:26.177 "name": "pt3", 00:10:26.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.177 "is_configured": true, 00:10:26.177 "data_offset": 2048, 00:10:26.177 "data_size": 63488 00:10:26.177 } 00:10:26.177 ] 00:10:26.177 }' 00:10:26.177 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.177 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.435 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.435 [2024-10-08 16:18:19.745433] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.693 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.693 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.693 "name": "raid_bdev1", 00:10:26.693 "aliases": [ 00:10:26.693 "848b1e7b-6529-421a-9124-17862ff342bc" 00:10:26.693 ], 00:10:26.693 "product_name": "Raid Volume", 00:10:26.693 "block_size": 512, 00:10:26.693 "num_blocks": 190464, 00:10:26.693 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc", 00:10:26.693 "assigned_rate_limits": { 00:10:26.693 "rw_ios_per_sec": 0, 00:10:26.693 "rw_mbytes_per_sec": 0, 00:10:26.693 "r_mbytes_per_sec": 0, 00:10:26.693 "w_mbytes_per_sec": 0 00:10:26.693 }, 00:10:26.693 "claimed": false, 00:10:26.693 "zoned": false, 00:10:26.693 "supported_io_types": { 00:10:26.693 "read": true, 00:10:26.693 "write": true, 00:10:26.693 "unmap": true, 00:10:26.693 "flush": true, 00:10:26.693 "reset": true, 00:10:26.693 "nvme_admin": false, 00:10:26.693 "nvme_io": false, 00:10:26.693 "nvme_io_md": false, 00:10:26.693 "write_zeroes": true, 00:10:26.693 "zcopy": false, 00:10:26.693 "get_zone_info": false, 00:10:26.693 "zone_management": false, 00:10:26.693 "zone_append": false, 00:10:26.693 "compare": 
false,
00:10:26.693 "compare_and_write": false,
00:10:26.693 "abort": false,
00:10:26.693 "seek_hole": false,
00:10:26.693 "seek_data": false,
00:10:26.693 "copy": false,
00:10:26.693 "nvme_iov_md": false
00:10:26.693 },
00:10:26.693 "memory_domains": [
00:10:26.693 {
00:10:26.693 "dma_device_id": "system",
00:10:26.693 "dma_device_type": 1
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:26.693 "dma_device_type": 2
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "dma_device_id": "system",
00:10:26.693 "dma_device_type": 1
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:26.693 "dma_device_type": 2
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "dma_device_id": "system",
00:10:26.693 "dma_device_type": 1
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:26.693 "dma_device_type": 2
00:10:26.693 }
00:10:26.693 ],
00:10:26.693 "driver_specific": {
00:10:26.693 "raid": {
00:10:26.693 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:26.693 "strip_size_kb": 64,
00:10:26.693 "state": "online",
00:10:26.693 "raid_level": "raid0",
00:10:26.693 "superblock": true,
00:10:26.693 "num_base_bdevs": 3,
00:10:26.693 "num_base_bdevs_discovered": 3,
00:10:26.693 "num_base_bdevs_operational": 3,
00:10:26.693 "base_bdevs_list": [
00:10:26.693 {
00:10:26.693 "name": "pt1",
00:10:26.693 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:26.693 "is_configured": true,
00:10:26.693 "data_offset": 2048,
00:10:26.693 "data_size": 63488
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "name": "pt2",
00:10:26.693 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:26.693 "is_configured": true,
00:10:26.693 "data_offset": 2048,
00:10:26.693 "data_size": 63488
00:10:26.693 },
00:10:26.693 {
00:10:26.693 "name": "pt3",
00:10:26.693 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:26.693 "is_configured": true,
00:10:26.693 "data_offset": 2048,
00:10:26.693 "data_size": 63488
00:10:26.693 }
00:10:26.693 ]
00:10:26.693 }
00:10:26.693 }
00:10:26.693 }'
00:10:26.693 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:26.694 pt2
00:10:26.694 pt3'
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:26.694 16:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:26.694 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:26.694 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.694 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.694 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 [2024-10-08 16:18:20.061409] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=848b1e7b-6529-421a-9124-17862ff342bc
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 848b1e7b-6529-421a-9124-17862ff342bc ']'
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 [2024-10-08 16:18:20.113063] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:26.953 [2024-10-08 16:18:20.113231] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:26.953 [2024-10-08 16:18:20.113432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:26.953 [2024-10-08 16:18:20.113633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:26.953 [2024-10-08 16:18:20.113783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.953 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.954 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.212 [2024-10-08 16:18:20.277190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:27.212 [2024-10-08 16:18:20.279975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:27.212 [2024-10-08 16:18:20.280165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:27.212 [2024-10-08 16:18:20.280363] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:27.212 [2024-10-08 16:18:20.280603] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:27.212 [2024-10-08 16:18:20.280807] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:27.212 [2024-10-08 16:18:20.281051] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:27.212 [2024-10-08 16:18:20.281227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:27.212 request:
00:10:27.212 {
00:10:27.212 "name": "raid_bdev1",
00:10:27.212 "raid_level": "raid0",
00:10:27.212 "base_bdevs": [
00:10:27.212 "malloc1",
00:10:27.212 "malloc2",
00:10:27.212 "malloc3"
00:10:27.212 ],
00:10:27.212 "strip_size_kb": 64,
00:10:27.212 "superblock": false,
00:10:27.212 "method": "bdev_raid_create",
00:10:27.212 "req_id": 1
00:10:27.212 }
00:10:27.212 Got JSON-RPC error response
00:10:27.212 response:
00:10:27.212 {
00:10:27.212 "code": -17,
00:10:27.212 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:27.212 }
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.212 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.213 [2024-10-08 16:18:20.345604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:27.213 [2024-10-08 16:18:20.345829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:27.213 [2024-10-08 16:18:20.345875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:27.213 [2024-10-08 16:18:20.345893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:27.213 [2024-10-08 16:18:20.348900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:27.213 [2024-10-08 16:18:20.349065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:27.213 [2024-10-08 16:18:20.349197] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:27.213 [2024-10-08 16:18:20.349271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:27.213 pt1
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.213 "name": "raid_bdev1",
00:10:27.213 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:27.213 "strip_size_kb": 64,
00:10:27.213 "state": "configuring",
00:10:27.213 "raid_level": "raid0",
00:10:27.213 "superblock": true,
00:10:27.213 "num_base_bdevs": 3,
00:10:27.213 "num_base_bdevs_discovered": 1,
00:10:27.213 "num_base_bdevs_operational": 3,
00:10:27.213 "base_bdevs_list": [
00:10:27.213 {
00:10:27.213 "name": "pt1",
00:10:27.213 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:27.213 "is_configured": true,
00:10:27.213 "data_offset": 2048,
00:10:27.213 "data_size": 63488
00:10:27.213 },
00:10:27.213 {
00:10:27.213 "name": null,
00:10:27.213 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:27.213 "is_configured": false,
00:10:27.213 "data_offset": 2048,
00:10:27.213 "data_size": 63488
00:10:27.213 },
00:10:27.213 {
00:10:27.213 "name": null,
00:10:27.213 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:27.213 "is_configured": false,
00:10:27.213 "data_offset": 2048,
00:10:27.213 "data_size": 63488
00:10:27.213 }
00:10:27.213 ]
00:10:27.213 }'
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.213 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.785 [2024-10-08 16:18:20.885720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:27.785 [2024-10-08 16:18:20.885822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:27.785 [2024-10-08 16:18:20.885873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:27.785 [2024-10-08 16:18:20.885889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:27.785 [2024-10-08 16:18:20.886452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:27.785 [2024-10-08 16:18:20.886485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:27.785 [2024-10-08 16:18:20.886629] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:27.785 [2024-10-08 16:18:20.886671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:27.785 pt2
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.785 [2024-10-08 16:18:20.893732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.785 "name": "raid_bdev1",
00:10:27.785 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:27.785 "strip_size_kb": 64,
00:10:27.785 "state": "configuring",
00:10:27.785 "raid_level": "raid0",
00:10:27.785 "superblock": true,
00:10:27.785 "num_base_bdevs": 3,
00:10:27.785 "num_base_bdevs_discovered": 1,
00:10:27.785 "num_base_bdevs_operational": 3,
00:10:27.785 "base_bdevs_list": [
00:10:27.785 {
00:10:27.785 "name": "pt1",
00:10:27.785 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:27.785 "is_configured": true,
00:10:27.785 "data_offset": 2048,
00:10:27.785 "data_size": 63488
00:10:27.785 },
00:10:27.785 {
00:10:27.785 "name": null,
00:10:27.785 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:27.785 "is_configured": false,
00:10:27.785 "data_offset": 0,
00:10:27.785 "data_size": 63488
00:10:27.785 },
00:10:27.785 {
00:10:27.785 "name": null,
00:10:27.785 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:27.785 "is_configured": false,
00:10:27.785 "data_offset": 2048,
00:10:27.785 "data_size": 63488
00:10:27.785 }
00:10:27.785 ]
00:10:27.785 }'
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.785 16:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.351 [2024-10-08 16:18:21.421882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:28.351 [2024-10-08 16:18:21.422000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:28.351 [2024-10-08 16:18:21.422037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:10:28.351 [2024-10-08 16:18:21.422058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:28.351 [2024-10-08 16:18:21.422770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:28.351 [2024-10-08 16:18:21.422805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:28.351 [2024-10-08 16:18:21.422934] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:28.351 [2024-10-08 16:18:21.422994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:28.351 pt2
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.351 [2024-10-08 16:18:21.429868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:28.351 [2024-10-08 16:18:21.429933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:28.351 [2024-10-08 16:18:21.429958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:28.351 [2024-10-08 16:18:21.429976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:28.351 [2024-10-08 16:18:21.430500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:28.351 [2024-10-08 16:18:21.430558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:28.351 [2024-10-08 16:18:21.430658] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:28.351 [2024-10-08 16:18:21.430697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:28.351 [2024-10-08 16:18:21.430865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:28.351 [2024-10-08 16:18:21.430888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:28.351 [2024-10-08 16:18:21.431245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:28.351 [2024-10-08 16:18:21.431448] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:28.351 [2024-10-08 16:18:21.431463] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:28.351 [2024-10-08 16:18:21.431671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:28.351 pt3
00:10:28.351 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.352 "name": "raid_bdev1",
00:10:28.352 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:28.352 "strip_size_kb": 64,
00:10:28.352 "state": "online",
00:10:28.352 "raid_level": "raid0",
00:10:28.352 "superblock": true,
00:10:28.352 "num_base_bdevs": 3,
00:10:28.352 "num_base_bdevs_discovered": 3,
00:10:28.352 "num_base_bdevs_operational": 3,
00:10:28.352 "base_bdevs_list": [
00:10:28.352 {
00:10:28.352 "name": "pt1",
00:10:28.352 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:28.352 "is_configured": true,
00:10:28.352 "data_offset": 2048,
00:10:28.352 "data_size": 63488
00:10:28.352 },
00:10:28.352 {
00:10:28.352 "name": "pt2",
00:10:28.352 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:28.352 "is_configured": true,
00:10:28.352 "data_offset": 2048,
00:10:28.352 "data_size": 63488
00:10:28.352 },
00:10:28.352 {
00:10:28.352 "name": "pt3",
00:10:28.352 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:28.352 "is_configured": true,
00:10:28.352 "data_offset": 2048,
00:10:28.352 "data_size": 63488
00:10:28.352 }
00:10:28.352 ]
00:10:28.352 }'
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.352 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:28.610 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-10-08 16:18:21.938582] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.868 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:28.868 "name": "raid_bdev1",
00:10:28.868 "aliases": [
00:10:28.868 "848b1e7b-6529-421a-9124-17862ff342bc"
00:10:28.868 ],
00:10:28.868 "product_name": "Raid Volume",
00:10:28.868 "block_size": 512,
00:10:28.868 "num_blocks": 190464,
00:10:28.868 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:28.868 "assigned_rate_limits": {
00:10:28.868 "rw_ios_per_sec": 0,
00:10:28.868 "rw_mbytes_per_sec": 0,
00:10:28.868 "r_mbytes_per_sec": 0,
00:10:28.868 "w_mbytes_per_sec": 0
00:10:28.868 },
00:10:28.868 "claimed": false,
00:10:28.868 "zoned": false,
00:10:28.868 "supported_io_types": {
00:10:28.868 "read": true,
00:10:28.868 "write": true,
00:10:28.868 "unmap": true,
00:10:28.868 "flush": true,
00:10:28.868 "reset": true,
00:10:28.868 "nvme_admin": false,
00:10:28.868 "nvme_io": false,
00:10:28.868 "nvme_io_md": false,
00:10:28.868 "write_zeroes": true,
00:10:28.868 "zcopy": false,
00:10:28.868 "get_zone_info": false,
00:10:28.868 "zone_management": false,
00:10:28.868 "zone_append": false,
00:10:28.868 "compare": false,
00:10:28.868 "compare_and_write": false,
00:10:28.868 "abort": false,
00:10:28.868 "seek_hole": false,
00:10:28.868 "seek_data": false,
00:10:28.868 "copy": false,
00:10:28.868 "nvme_iov_md": false
00:10:28.868 },
00:10:28.868 "memory_domains": [
00:10:28.868 {
00:10:28.868 "dma_device_id": "system",
00:10:28.868 "dma_device_type": 1
00:10:28.868 },
00:10:28.869 {
00:10:28.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.869 "dma_device_type": 2
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "dma_device_id": "system",
00:10:28.869 "dma_device_type": 1
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.869 "dma_device_type": 2
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "dma_device_id": "system",
00:10:28.869 "dma_device_type": 1
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.869 "dma_device_type": 2
00:10:28.869 }
00:10:28.869 ],
00:10:28.869 "driver_specific": {
00:10:28.869 "raid": {
00:10:28.869 "uuid": "848b1e7b-6529-421a-9124-17862ff342bc",
00:10:28.869 "strip_size_kb": 64,
00:10:28.869 "state": "online",
00:10:28.869 "raid_level": "raid0",
00:10:28.869 "superblock": true,
00:10:28.869 "num_base_bdevs": 3,
00:10:28.869 "num_base_bdevs_discovered": 3,
00:10:28.869 "num_base_bdevs_operational": 3,
00:10:28.869 "base_bdevs_list": [
00:10:28.869 {
00:10:28.869 "name": "pt1",
00:10:28.869 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:28.869 "is_configured": true,
00:10:28.869 "data_offset": 2048,
00:10:28.869 "data_size": 63488
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "name": "pt2",
00:10:28.869 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:28.869 "is_configured": true,
00:10:28.869 "data_offset": 2048,
00:10:28.869 "data_size": 63488
00:10:28.869 },
00:10:28.869 {
00:10:28.869 "name": "pt3",
00:10:28.869 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:28.869 "is_configured": true,
00:10:28.869 "data_offset": 2048,
00:10:28.869 "data_size": 63488
00:10:28.869 }
00:10:28.869 ]
00:10:28.869 }
00:10:28.869 }
00:10:28.869 }'
00:10:28.869 16:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:28.869 pt2
00:10:28.869 pt3'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.869 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:29.127 [2024-10-08 16:18:22.246439] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 848b1e7b-6529-421a-9124-17862ff342bc '!=' 848b1e7b-6529-421a-9124-17862ff342bc ']'
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65369
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65369 ']'
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65369
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65369
killing process with pid 65369
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65369'
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65369
00:10:29.127 [2024-10-08 16:18:22.324818] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:29.127 16:18:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 65369 00:10:29.127 [2024-10-08 16:18:22.324955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.127 [2024-10-08 16:18:22.325037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.127 [2024-10-08 16:18:22.325056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:29.447 [2024-10-08 16:18:22.601963] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.821 16:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:30.821 00:10:30.821 real 0m5.866s 00:10:30.821 user 0m8.642s 00:10:30.821 sys 0m0.871s 00:10:30.821 16:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.821 ************************************ 00:10:30.821 END TEST raid_superblock_test 00:10:30.821 ************************************ 00:10:30.821 16:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 16:18:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:30.821 16:18:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.821 16:18:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.821 16:18:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 ************************************ 00:10:30.821 START TEST raid_read_error_test 00:10:30.821 ************************************ 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:30.821 16:18:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wnB7iQALh5 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65636 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65636 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65636 ']' 00:10:30.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.821 16:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 [2024-10-08 16:18:24.060496] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:10:30.821 [2024-10-08 16:18:24.060717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65636 ] 00:10:31.079 [2024-10-08 16:18:24.241924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.338 [2024-10-08 16:18:24.502120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.596 [2024-10-08 16:18:24.708294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.596 [2024-10-08 16:18:24.708391] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.854 BaseBdev1_malloc 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.854 true 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.854 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.854 [2024-10-08 16:18:25.104054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.854 [2024-10-08 16:18:25.104397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.854 [2024-10-08 16:18:25.104436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.854 [2024-10-08 16:18:25.104457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.854 [2024-10-08 16:18:25.107380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.855 [2024-10-08 16:18:25.107611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.855 BaseBdev1 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.855 BaseBdev2_malloc 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.855 true 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.855 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.113 [2024-10-08 16:18:25.178348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:32.113 [2024-10-08 16:18:25.178435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.113 [2024-10-08 16:18:25.178461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:32.113 [2024-10-08 16:18:25.178479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.113 [2024-10-08 16:18:25.181261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.113 [2024-10-08 16:18:25.181315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:32.113 BaseBdev2 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.113 BaseBdev3_malloc 00:10:32.113 16:18:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.113 true 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.113 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.113 [2024-10-08 16:18:25.238865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.114 [2024-10-08 16:18:25.238955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.114 [2024-10-08 16:18:25.238983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.114 [2024-10-08 16:18:25.239002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.114 [2024-10-08 16:18:25.241787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.114 [2024-10-08 16:18:25.241837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:32.114 BaseBdev3 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.114 [2024-10-08 16:18:25.246957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.114 [2024-10-08 16:18:25.249376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.114 [2024-10-08 16:18:25.249487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.114 [2024-10-08 16:18:25.249781] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.114 [2024-10-08 16:18:25.249801] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:32.114 [2024-10-08 16:18:25.250120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:32.114 [2024-10-08 16:18:25.250338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.114 [2024-10-08 16:18:25.250359] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:32.114 [2024-10-08 16:18:25.250562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.114 16:18:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.114 "name": "raid_bdev1", 00:10:32.114 "uuid": "b0967f2d-5903-4a92-b887-3fbbbf5dd236", 00:10:32.114 "strip_size_kb": 64, 00:10:32.114 "state": "online", 00:10:32.114 "raid_level": "raid0", 00:10:32.114 "superblock": true, 00:10:32.114 "num_base_bdevs": 3, 00:10:32.114 "num_base_bdevs_discovered": 3, 00:10:32.114 "num_base_bdevs_operational": 3, 00:10:32.114 "base_bdevs_list": [ 00:10:32.114 { 00:10:32.114 "name": "BaseBdev1", 00:10:32.114 "uuid": "e30be4a3-dc65-51c5-b3ba-32d058c6f72d", 00:10:32.114 "is_configured": true, 00:10:32.114 "data_offset": 2048, 00:10:32.114 "data_size": 63488 00:10:32.114 }, 00:10:32.114 { 00:10:32.114 "name": "BaseBdev2", 00:10:32.114 "uuid": "16be0815-c3b5-5fdb-b208-9556bd93460b", 00:10:32.114 "is_configured": true, 00:10:32.114 "data_offset": 2048, 00:10:32.114 "data_size": 63488 
00:10:32.114 }, 00:10:32.114 { 00:10:32.114 "name": "BaseBdev3", 00:10:32.114 "uuid": "eb2c7ce4-914d-54e1-9b96-790b5e08690c", 00:10:32.114 "is_configured": true, 00:10:32.114 "data_offset": 2048, 00:10:32.114 "data_size": 63488 00:10:32.114 } 00:10:32.114 ] 00:10:32.114 }' 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.114 16:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.680 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:32.680 16:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:32.680 [2024-10-08 16:18:25.993026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.616 "name": "raid_bdev1", 00:10:33.616 "uuid": "b0967f2d-5903-4a92-b887-3fbbbf5dd236", 00:10:33.616 "strip_size_kb": 64, 00:10:33.616 "state": "online", 00:10:33.616 "raid_level": "raid0", 00:10:33.616 "superblock": true, 00:10:33.616 "num_base_bdevs": 3, 00:10:33.616 "num_base_bdevs_discovered": 3, 00:10:33.616 "num_base_bdevs_operational": 3, 00:10:33.616 "base_bdevs_list": [ 00:10:33.616 { 00:10:33.616 "name": "BaseBdev1", 00:10:33.616 "uuid": "e30be4a3-dc65-51c5-b3ba-32d058c6f72d", 00:10:33.616 "is_configured": true, 00:10:33.616 "data_offset": 2048, 00:10:33.616 "data_size": 63488 
00:10:33.616 }, 00:10:33.616 { 00:10:33.616 "name": "BaseBdev2", 00:10:33.616 "uuid": "16be0815-c3b5-5fdb-b208-9556bd93460b", 00:10:33.616 "is_configured": true, 00:10:33.616 "data_offset": 2048, 00:10:33.616 "data_size": 63488 00:10:33.616 }, 00:10:33.616 { 00:10:33.616 "name": "BaseBdev3", 00:10:33.616 "uuid": "eb2c7ce4-914d-54e1-9b96-790b5e08690c", 00:10:33.616 "is_configured": true, 00:10:33.616 "data_offset": 2048, 00:10:33.616 "data_size": 63488 00:10:33.616 } 00:10:33.616 ] 00:10:33.616 }' 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.616 16:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.182 [2024-10-08 16:18:27.375740] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.182 [2024-10-08 16:18:27.375914] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.182 [2024-10-08 16:18:27.379529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.182 [2024-10-08 16:18:27.379753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.182 [2024-10-08 16:18:27.379831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.182 [2024-10-08 16:18:27.379848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:34.182 { 00:10:34.182 "results": [ 00:10:34.182 { 00:10:34.182 "job": "raid_bdev1", 00:10:34.182 "core_mask": "0x1", 00:10:34.182 "workload": "randrw", 00:10:34.182 "percentage": 50, 
00:10:34.182 "status": "finished", 00:10:34.182 "queue_depth": 1, 00:10:34.182 "io_size": 131072, 00:10:34.182 "runtime": 1.379172, 00:10:34.182 "iops": 9689.14682142619, 00:10:34.182 "mibps": 1211.1433526782737, 00:10:34.182 "io_failed": 1, 00:10:34.182 "io_timeout": 0, 00:10:34.182 "avg_latency_us": 144.66269108323584, 00:10:34.182 "min_latency_us": 43.28727272727273, 00:10:34.182 "max_latency_us": 1839.4763636363637 00:10:34.182 } 00:10:34.182 ], 00:10:34.182 "core_count": 1 00:10:34.182 } 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65636 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65636 ']' 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65636 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65636 00:10:34.182 killing process with pid 65636 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65636' 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65636 00:10:34.182 [2024-10-08 16:18:27.415711] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.182 16:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65636 00:10:34.441 [2024-10-08 
16:18:27.646636] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wnB7iQALh5 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:35.815 ************************************ 00:10:35.815 END TEST raid_read_error_test 00:10:35.815 ************************************ 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:35.815 00:10:35.815 real 0m5.100s 00:10:35.815 user 0m6.307s 00:10:35.815 sys 0m0.646s 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.815 16:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.815 16:18:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:35.815 16:18:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:35.815 16:18:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.815 16:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.815 ************************************ 00:10:35.815 START TEST raid_write_error_test 00:10:35.815 ************************************ 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:10:35.815 16:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:35.815 16:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6rAz93PpDj 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65788 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65788 00:10:35.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65788 ']' 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.815 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:35.816 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.816 16:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.074 [2024-10-08 16:18:29.217876] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:10:36.074 [2024-10-08 16:18:29.218074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65788 ] 00:10:36.074 [2024-10-08 16:18:29.392946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.641 [2024-10-08 16:18:29.668578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.641 [2024-10-08 16:18:29.893746] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.641 [2024-10-08 16:18:29.893848] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 BaseBdev1_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 true 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 [2024-10-08 16:18:30.292195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:37.209 [2024-10-08 16:18:30.292466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.209 [2024-10-08 16:18:30.292506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:37.209 [2024-10-08 16:18:30.292543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.209 [2024-10-08 16:18:30.295762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.209 [2024-10-08 16:18:30.295813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:37.209 BaseBdev1 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.209 BaseBdev2_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 true 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 [2024-10-08 16:18:30.374204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:37.209 [2024-10-08 16:18:30.374411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.209 [2024-10-08 16:18:30.374485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:37.209 [2024-10-08 16:18:30.374539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.209 [2024-10-08 16:18:30.377661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.209 [2024-10-08 16:18:30.377728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:37.209 BaseBdev2 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.209 16:18:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 BaseBdev3_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 true 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 [2024-10-08 16:18:30.448555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:37.209 [2024-10-08 16:18:30.448656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.209 [2024-10-08 16:18:30.448685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:37.209 [2024-10-08 16:18:30.448704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.209 [2024-10-08 16:18:30.451859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.209 [2024-10-08 16:18:30.451910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:37.209 BaseBdev3 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 [2024-10-08 16:18:30.460823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.209 [2024-10-08 16:18:30.463703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.209 [2024-10-08 16:18:30.463942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.209 [2024-10-08 16:18:30.464277] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.209 [2024-10-08 16:18:30.464408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:37.209 [2024-10-08 16:18:30.464859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:37.209 [2024-10-08 16:18:30.465210] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.209 [2024-10-08 16:18:30.465344] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:37.209 [2024-10-08 16:18:30.465751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.209 "name": "raid_bdev1", 00:10:37.209 "uuid": "f514f478-15a5-4b2d-9913-9c900bec5a81", 00:10:37.209 "strip_size_kb": 64, 00:10:37.209 "state": "online", 00:10:37.209 "raid_level": "raid0", 00:10:37.209 "superblock": true, 00:10:37.209 "num_base_bdevs": 3, 00:10:37.209 "num_base_bdevs_discovered": 3, 00:10:37.209 "num_base_bdevs_operational": 3, 00:10:37.209 "base_bdevs_list": [ 00:10:37.209 { 00:10:37.209 "name": "BaseBdev1", 
00:10:37.209 "uuid": "769857a9-f06b-530e-b70d-122e371e49ff", 00:10:37.209 "is_configured": true, 00:10:37.209 "data_offset": 2048, 00:10:37.209 "data_size": 63488 00:10:37.209 }, 00:10:37.209 { 00:10:37.209 "name": "BaseBdev2", 00:10:37.209 "uuid": "0c9c365c-3f91-51c7-97d9-ace536c9af1d", 00:10:37.209 "is_configured": true, 00:10:37.209 "data_offset": 2048, 00:10:37.209 "data_size": 63488 00:10:37.209 }, 00:10:37.209 { 00:10:37.209 "name": "BaseBdev3", 00:10:37.209 "uuid": "daa50fa6-ffdf-55e9-9f14-b9015f4572c6", 00:10:37.209 "is_configured": true, 00:10:37.209 "data_offset": 2048, 00:10:37.209 "data_size": 63488 00:10:37.209 } 00:10:37.209 ] 00:10:37.209 }' 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.209 16:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.777 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.777 16:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:38.036 [2024-10-08 16:18:31.119430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:38.972 16:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:38.972 16:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.972 16:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.972 "name": "raid_bdev1", 00:10:38.972 "uuid": "f514f478-15a5-4b2d-9913-9c900bec5a81", 00:10:38.972 "strip_size_kb": 64, 00:10:38.972 "state": "online", 00:10:38.972 
"raid_level": "raid0", 00:10:38.972 "superblock": true, 00:10:38.972 "num_base_bdevs": 3, 00:10:38.972 "num_base_bdevs_discovered": 3, 00:10:38.972 "num_base_bdevs_operational": 3, 00:10:38.972 "base_bdevs_list": [ 00:10:38.972 { 00:10:38.972 "name": "BaseBdev1", 00:10:38.972 "uuid": "769857a9-f06b-530e-b70d-122e371e49ff", 00:10:38.972 "is_configured": true, 00:10:38.972 "data_offset": 2048, 00:10:38.972 "data_size": 63488 00:10:38.972 }, 00:10:38.972 { 00:10:38.972 "name": "BaseBdev2", 00:10:38.972 "uuid": "0c9c365c-3f91-51c7-97d9-ace536c9af1d", 00:10:38.972 "is_configured": true, 00:10:38.972 "data_offset": 2048, 00:10:38.972 "data_size": 63488 00:10:38.972 }, 00:10:38.972 { 00:10:38.972 "name": "BaseBdev3", 00:10:38.972 "uuid": "daa50fa6-ffdf-55e9-9f14-b9015f4572c6", 00:10:38.972 "is_configured": true, 00:10:38.972 "data_offset": 2048, 00:10:38.972 "data_size": 63488 00:10:38.972 } 00:10:38.972 ] 00:10:38.972 }' 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.972 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.230 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.230 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.230 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.230 [2024-10-08 16:18:32.513990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.230 [2024-10-08 16:18:32.514164] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.230 [2024-10-08 16:18:32.517667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.230 [2024-10-08 16:18:32.517856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.230 [2024-10-08 16:18:32.517963] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.230 { "results": [ { "job": "raid_bdev1", "core_mask": "0x1", "workload": "randrw", "percentage": 50, "status": "finished", "queue_depth": 1, "io_size": 131072, "runtime": 1.391627, "iops": 9856.089311288155, "mibps": 1232.0111639110194, "io_failed": 1, "io_timeout": 0, "avg_latency_us": 143.00211204411247, "min_latency_us": 36.305454545454545, "max_latency_us": 1861.8181818181818 } ], "core_count": 1 } 00:10:39.231 [2024-10-08 16:18:32.518200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65788 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65788 ']' 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65788 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65788 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.231 killing process with pid 65788 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.231 
16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65788' 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65788 00:10:39.231 16:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65788 00:10:39.231 [2024-10-08 16:18:32.549972] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.490 [2024-10-08 16:18:32.773960] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6rAz93PpDj 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:40.864 ************************************ 00:10:40.864 END TEST raid_write_error_test 00:10:40.864 ************************************ 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:40.864 00:10:40.864 real 0m5.082s 00:10:40.864 user 0m6.165s 00:10:40.864 sys 0m0.670s 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.864 16:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 16:18:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:41.124 16:18:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:41.124 16:18:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:41.124 16:18:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.124 16:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 ************************************ 00:10:41.124 START TEST raid_state_function_test 00:10:41.124 ************************************ 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:41.124 16:18:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65935 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65935' 00:10:41.124 Process raid pid: 65935 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65935 00:10:41.124 16:18:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65935 ']' 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.124 16:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 [2024-10-08 16:18:34.339884] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:10:41.124 [2024-10-08 16:18:34.340072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.382 [2024-10-08 16:18:34.514038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.641 [2024-10-08 16:18:34.790110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.900 [2024-10-08 16:18:35.018793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.900 [2024-10-08 16:18:35.018865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.159 [2024-10-08 16:18:35.369332] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.159 [2024-10-08 16:18:35.369402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.159 [2024-10-08 16:18:35.369419] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.159 [2024-10-08 16:18:35.369438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.159 [2024-10-08 16:18:35.369449] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.159 [2024-10-08 16:18:35.369466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.159 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.160 "name": "Existed_Raid", 00:10:42.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.160 "strip_size_kb": 64, 00:10:42.160 "state": "configuring", 00:10:42.160 "raid_level": "concat", 00:10:42.160 "superblock": false, 00:10:42.160 "num_base_bdevs": 3, 00:10:42.160 "num_base_bdevs_discovered": 0, 00:10:42.160 "num_base_bdevs_operational": 3, 00:10:42.160 "base_bdevs_list": [ 00:10:42.160 { 00:10:42.160 "name": "BaseBdev1", 00:10:42.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.160 "is_configured": false, 00:10:42.160 "data_offset": 0, 00:10:42.160 "data_size": 0 00:10:42.160 }, 00:10:42.160 { 00:10:42.160 "name": "BaseBdev2", 00:10:42.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.160 "is_configured": false, 00:10:42.160 "data_offset": 0, 00:10:42.160 "data_size": 0 00:10:42.160 }, 00:10:42.160 { 00:10:42.160 "name": "BaseBdev3", 00:10:42.160 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:42.160 "is_configured": false, 00:10:42.160 "data_offset": 0, 00:10:42.160 "data_size": 0 00:10:42.160 } 00:10:42.160 ] 00:10:42.160 }' 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.160 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-10-08 16:18:35.837343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.727 [2024-10-08 16:18:35.837401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-10-08 16:18:35.845337] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.727 [2024-10-08 16:18:35.845393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.727 [2024-10-08 16:18:35.845409] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.727 [2024-10-08 16:18:35.845425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:42.727 [2024-10-08 16:18:35.845435] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.727 [2024-10-08 16:18:35.845449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-10-08 16:18:35.904932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.727 BaseBdev1 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [ 00:10:42.727 { 00:10:42.727 "name": "BaseBdev1", 00:10:42.727 "aliases": [ 00:10:42.727 "24afdf72-906b-4cec-988a-22fb652b1f4f" 00:10:42.727 ], 00:10:42.727 "product_name": "Malloc disk", 00:10:42.727 "block_size": 512, 00:10:42.727 "num_blocks": 65536, 00:10:42.727 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:42.727 "assigned_rate_limits": { 00:10:42.727 "rw_ios_per_sec": 0, 00:10:42.727 "rw_mbytes_per_sec": 0, 00:10:42.727 "r_mbytes_per_sec": 0, 00:10:42.727 "w_mbytes_per_sec": 0 00:10:42.727 }, 00:10:42.727 "claimed": true, 00:10:42.727 "claim_type": "exclusive_write", 00:10:42.727 "zoned": false, 00:10:42.727 "supported_io_types": { 00:10:42.727 "read": true, 00:10:42.727 "write": true, 00:10:42.727 "unmap": true, 00:10:42.727 "flush": true, 00:10:42.727 "reset": true, 00:10:42.727 "nvme_admin": false, 00:10:42.727 "nvme_io": false, 00:10:42.727 "nvme_io_md": false, 00:10:42.727 "write_zeroes": true, 00:10:42.727 "zcopy": true, 00:10:42.727 "get_zone_info": false, 00:10:42.727 "zone_management": false, 00:10:42.727 "zone_append": false, 00:10:42.727 "compare": false, 00:10:42.727 "compare_and_write": false, 00:10:42.727 "abort": true, 00:10:42.727 "seek_hole": false, 00:10:42.727 "seek_data": false, 00:10:42.727 "copy": true, 00:10:42.727 "nvme_iov_md": false 00:10:42.727 }, 00:10:42.727 "memory_domains": [ 00:10:42.727 { 00:10:42.727 "dma_device_id": "system", 00:10:42.727 "dma_device_type": 1 00:10:42.727 }, 00:10:42.727 { 00:10:42.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:42.727 "dma_device_type": 2 00:10:42.727 } 00:10:42.727 ], 00:10:42.727 "driver_specific": {} 00:10:42.727 } 00:10:42.727 ] 00:10:42.727 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.728 16:18:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.728 "name": "Existed_Raid", 00:10:42.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.728 "strip_size_kb": 64, 00:10:42.728 "state": "configuring", 00:10:42.728 "raid_level": "concat", 00:10:42.728 "superblock": false, 00:10:42.728 "num_base_bdevs": 3, 00:10:42.728 "num_base_bdevs_discovered": 1, 00:10:42.728 "num_base_bdevs_operational": 3, 00:10:42.728 "base_bdevs_list": [ 00:10:42.728 { 00:10:42.728 "name": "BaseBdev1", 00:10:42.728 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:42.728 "is_configured": true, 00:10:42.728 "data_offset": 0, 00:10:42.728 "data_size": 65536 00:10:42.728 }, 00:10:42.728 { 00:10:42.728 "name": "BaseBdev2", 00:10:42.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.728 "is_configured": false, 00:10:42.728 "data_offset": 0, 00:10:42.728 "data_size": 0 00:10:42.728 }, 00:10:42.728 { 00:10:42.728 "name": "BaseBdev3", 00:10:42.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.728 "is_configured": false, 00:10:42.728 "data_offset": 0, 00:10:42.728 "data_size": 0 00:10:42.728 } 00:10:42.728 ] 00:10:42.728 }' 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.728 16:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 [2024-10-08 16:18:36.433147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.295 [2024-10-08 16:18:36.433231] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 [2024-10-08 16:18:36.441213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.295 [2024-10-08 16:18:36.443927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.295 [2024-10-08 16:18:36.443988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.295 [2024-10-08 16:18:36.444006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.295 [2024-10-08 16:18:36.444021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.295 16:18:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.295 "name": "Existed_Raid", 00:10:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.295 "strip_size_kb": 64, 00:10:43.295 "state": "configuring", 00:10:43.295 "raid_level": "concat", 00:10:43.295 "superblock": false, 00:10:43.295 "num_base_bdevs": 3, 00:10:43.295 "num_base_bdevs_discovered": 1, 00:10:43.295 "num_base_bdevs_operational": 3, 00:10:43.295 "base_bdevs_list": [ 00:10:43.295 { 00:10:43.295 "name": "BaseBdev1", 00:10:43.295 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:43.295 "is_configured": true, 00:10:43.295 "data_offset": 
0, 00:10:43.295 "data_size": 65536 00:10:43.295 }, 00:10:43.295 { 00:10:43.295 "name": "BaseBdev2", 00:10:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.295 "is_configured": false, 00:10:43.295 "data_offset": 0, 00:10:43.295 "data_size": 0 00:10:43.295 }, 00:10:43.295 { 00:10:43.295 "name": "BaseBdev3", 00:10:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.295 "is_configured": false, 00:10:43.295 "data_offset": 0, 00:10:43.295 "data_size": 0 00:10:43.295 } 00:10:43.295 ] 00:10:43.295 }' 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.295 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 [2024-10-08 16:18:36.999730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.863 BaseBdev2 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:43.863 16:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 [ 00:10:43.863 { 00:10:43.863 "name": "BaseBdev2", 00:10:43.863 "aliases": [ 00:10:43.863 "541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed" 00:10:43.863 ], 00:10:43.863 "product_name": "Malloc disk", 00:10:43.863 "block_size": 512, 00:10:43.863 "num_blocks": 65536, 00:10:43.863 "uuid": "541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed", 00:10:43.863 "assigned_rate_limits": { 00:10:43.863 "rw_ios_per_sec": 0, 00:10:43.863 "rw_mbytes_per_sec": 0, 00:10:43.863 "r_mbytes_per_sec": 0, 00:10:43.863 "w_mbytes_per_sec": 0 00:10:43.863 }, 00:10:43.863 "claimed": true, 00:10:43.863 "claim_type": "exclusive_write", 00:10:43.863 "zoned": false, 00:10:43.863 "supported_io_types": { 00:10:43.863 "read": true, 00:10:43.863 "write": true, 00:10:43.863 "unmap": true, 00:10:43.863 "flush": true, 00:10:43.863 "reset": true, 00:10:43.863 "nvme_admin": false, 00:10:43.863 "nvme_io": false, 00:10:43.863 "nvme_io_md": false, 00:10:43.863 "write_zeroes": true, 00:10:43.863 "zcopy": true, 00:10:43.863 "get_zone_info": false, 00:10:43.863 "zone_management": false, 00:10:43.863 "zone_append": false, 00:10:43.863 "compare": false, 00:10:43.863 "compare_and_write": false, 00:10:43.863 "abort": true, 00:10:43.863 "seek_hole": 
false, 00:10:43.863 "seek_data": false, 00:10:43.863 "copy": true, 00:10:43.863 "nvme_iov_md": false 00:10:43.863 }, 00:10:43.863 "memory_domains": [ 00:10:43.863 { 00:10:43.863 "dma_device_id": "system", 00:10:43.863 "dma_device_type": 1 00:10:43.863 }, 00:10:43.863 { 00:10:43.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.863 "dma_device_type": 2 00:10:43.863 } 00:10:43.863 ], 00:10:43.863 "driver_specific": {} 00:10:43.863 } 00:10:43.863 ] 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.863 "name": "Existed_Raid", 00:10:43.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.863 "strip_size_kb": 64, 00:10:43.863 "state": "configuring", 00:10:43.863 "raid_level": "concat", 00:10:43.863 "superblock": false, 00:10:43.863 "num_base_bdevs": 3, 00:10:43.863 "num_base_bdevs_discovered": 2, 00:10:43.863 "num_base_bdevs_operational": 3, 00:10:43.863 "base_bdevs_list": [ 00:10:43.863 { 00:10:43.863 "name": "BaseBdev1", 00:10:43.863 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:43.863 "is_configured": true, 00:10:43.863 "data_offset": 0, 00:10:43.863 "data_size": 65536 00:10:43.863 }, 00:10:43.863 { 00:10:43.863 "name": "BaseBdev2", 00:10:43.863 "uuid": "541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed", 00:10:43.863 "is_configured": true, 00:10:43.863 "data_offset": 0, 00:10:43.863 "data_size": 65536 00:10:43.863 }, 00:10:43.863 { 00:10:43.863 "name": "BaseBdev3", 00:10:43.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.863 "is_configured": false, 00:10:43.863 "data_offset": 0, 00:10:43.863 "data_size": 0 00:10:43.863 } 00:10:43.863 ] 00:10:43.863 }' 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.863 16:18:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.429 [2024-10-08 16:18:37.586626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.429 [2024-10-08 16:18:37.586706] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.429 [2024-10-08 16:18:37.586727] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:44.429 [2024-10-08 16:18:37.587124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.429 [2024-10-08 16:18:37.587362] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.429 [2024-10-08 16:18:37.587379] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.429 [2024-10-08 16:18:37.587747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.429 BaseBdev3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.429 16:18:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.429 [ 00:10:44.429 { 00:10:44.429 "name": "BaseBdev3", 00:10:44.429 "aliases": [ 00:10:44.429 "e37f8783-af44-4acf-bf48-fb82cb51296c" 00:10:44.429 ], 00:10:44.429 "product_name": "Malloc disk", 00:10:44.429 "block_size": 512, 00:10:44.429 "num_blocks": 65536, 00:10:44.429 "uuid": "e37f8783-af44-4acf-bf48-fb82cb51296c", 00:10:44.429 "assigned_rate_limits": { 00:10:44.429 "rw_ios_per_sec": 0, 00:10:44.429 "rw_mbytes_per_sec": 0, 00:10:44.429 "r_mbytes_per_sec": 0, 00:10:44.429 "w_mbytes_per_sec": 0 00:10:44.429 }, 00:10:44.429 "claimed": true, 00:10:44.429 "claim_type": "exclusive_write", 00:10:44.429 "zoned": false, 00:10:44.429 "supported_io_types": { 00:10:44.429 "read": true, 00:10:44.429 "write": true, 00:10:44.429 "unmap": true, 00:10:44.429 "flush": true, 00:10:44.429 "reset": true, 00:10:44.429 "nvme_admin": false, 00:10:44.429 "nvme_io": false, 00:10:44.429 "nvme_io_md": false, 00:10:44.429 "write_zeroes": true, 00:10:44.429 "zcopy": true, 00:10:44.429 "get_zone_info": false, 00:10:44.429 "zone_management": false, 00:10:44.429 "zone_append": false, 00:10:44.429 "compare": false, 
00:10:44.429 "compare_and_write": false, 00:10:44.429 "abort": true, 00:10:44.429 "seek_hole": false, 00:10:44.429 "seek_data": false, 00:10:44.429 "copy": true, 00:10:44.429 "nvme_iov_md": false 00:10:44.429 }, 00:10:44.429 "memory_domains": [ 00:10:44.429 { 00:10:44.429 "dma_device_id": "system", 00:10:44.429 "dma_device_type": 1 00:10:44.429 }, 00:10:44.429 { 00:10:44.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.429 "dma_device_type": 2 00:10:44.429 } 00:10:44.429 ], 00:10:44.429 "driver_specific": {} 00:10:44.429 } 00:10:44.429 ] 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.429 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.430 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.430 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.430 "name": "Existed_Raid", 00:10:44.430 "uuid": "166cab1d-8041-4800-bcf0-37c720b31476", 00:10:44.430 "strip_size_kb": 64, 00:10:44.430 "state": "online", 00:10:44.430 "raid_level": "concat", 00:10:44.430 "superblock": false, 00:10:44.430 "num_base_bdevs": 3, 00:10:44.430 "num_base_bdevs_discovered": 3, 00:10:44.430 "num_base_bdevs_operational": 3, 00:10:44.430 "base_bdevs_list": [ 00:10:44.430 { 00:10:44.430 "name": "BaseBdev1", 00:10:44.430 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:44.430 "is_configured": true, 00:10:44.430 "data_offset": 0, 00:10:44.430 "data_size": 65536 00:10:44.430 }, 00:10:44.430 { 00:10:44.430 "name": "BaseBdev2", 00:10:44.430 "uuid": "541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed", 00:10:44.430 "is_configured": true, 00:10:44.430 "data_offset": 0, 00:10:44.430 "data_size": 65536 00:10:44.430 }, 00:10:44.430 { 00:10:44.430 "name": "BaseBdev3", 00:10:44.430 "uuid": "e37f8783-af44-4acf-bf48-fb82cb51296c", 00:10:44.430 "is_configured": true, 00:10:44.430 "data_offset": 0, 00:10:44.430 "data_size": 65536 00:10:44.430 } 00:10:44.430 ] 00:10:44.430 }' 00:10:44.430 16:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:44.430 16:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.000 [2024-10-08 16:18:38.143344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.000 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.000 "name": "Existed_Raid", 00:10:45.000 "aliases": [ 00:10:45.000 "166cab1d-8041-4800-bcf0-37c720b31476" 00:10:45.000 ], 00:10:45.000 "product_name": "Raid Volume", 00:10:45.000 "block_size": 512, 00:10:45.000 "num_blocks": 196608, 00:10:45.000 "uuid": "166cab1d-8041-4800-bcf0-37c720b31476", 00:10:45.000 "assigned_rate_limits": { 00:10:45.000 "rw_ios_per_sec": 0, 00:10:45.000 "rw_mbytes_per_sec": 0, 00:10:45.000 "r_mbytes_per_sec": 
0, 00:10:45.000 "w_mbytes_per_sec": 0 00:10:45.000 }, 00:10:45.000 "claimed": false, 00:10:45.000 "zoned": false, 00:10:45.000 "supported_io_types": { 00:10:45.000 "read": true, 00:10:45.000 "write": true, 00:10:45.000 "unmap": true, 00:10:45.000 "flush": true, 00:10:45.000 "reset": true, 00:10:45.000 "nvme_admin": false, 00:10:45.000 "nvme_io": false, 00:10:45.000 "nvme_io_md": false, 00:10:45.000 "write_zeroes": true, 00:10:45.000 "zcopy": false, 00:10:45.000 "get_zone_info": false, 00:10:45.000 "zone_management": false, 00:10:45.000 "zone_append": false, 00:10:45.000 "compare": false, 00:10:45.000 "compare_and_write": false, 00:10:45.000 "abort": false, 00:10:45.000 "seek_hole": false, 00:10:45.000 "seek_data": false, 00:10:45.000 "copy": false, 00:10:45.000 "nvme_iov_md": false 00:10:45.000 }, 00:10:45.000 "memory_domains": [ 00:10:45.000 { 00:10:45.000 "dma_device_id": "system", 00:10:45.001 "dma_device_type": 1 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.001 "dma_device_type": 2 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "dma_device_id": "system", 00:10:45.001 "dma_device_type": 1 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.001 "dma_device_type": 2 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "dma_device_id": "system", 00:10:45.001 "dma_device_type": 1 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.001 "dma_device_type": 2 00:10:45.001 } 00:10:45.001 ], 00:10:45.001 "driver_specific": { 00:10:45.001 "raid": { 00:10:45.001 "uuid": "166cab1d-8041-4800-bcf0-37c720b31476", 00:10:45.001 "strip_size_kb": 64, 00:10:45.001 "state": "online", 00:10:45.001 "raid_level": "concat", 00:10:45.001 "superblock": false, 00:10:45.001 "num_base_bdevs": 3, 00:10:45.001 "num_base_bdevs_discovered": 3, 00:10:45.001 "num_base_bdevs_operational": 3, 00:10:45.001 "base_bdevs_list": [ 00:10:45.001 { 00:10:45.001 "name": "BaseBdev1", 
00:10:45.001 "uuid": "24afdf72-906b-4cec-988a-22fb652b1f4f", 00:10:45.001 "is_configured": true, 00:10:45.001 "data_offset": 0, 00:10:45.001 "data_size": 65536 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "name": "BaseBdev2", 00:10:45.001 "uuid": "541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed", 00:10:45.001 "is_configured": true, 00:10:45.001 "data_offset": 0, 00:10:45.001 "data_size": 65536 00:10:45.001 }, 00:10:45.001 { 00:10:45.001 "name": "BaseBdev3", 00:10:45.001 "uuid": "e37f8783-af44-4acf-bf48-fb82cb51296c", 00:10:45.001 "is_configured": true, 00:10:45.001 "data_offset": 0, 00:10:45.001 "data_size": 65536 00:10:45.001 } 00:10:45.001 ] 00:10:45.001 } 00:10:45.001 } 00:10:45.001 }' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.001 BaseBdev2 00:10:45.001 BaseBdev3' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.001 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.259 [2024-10-08 16:18:38.438996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.259 [2024-10-08 16:18:38.439044] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.259 [2024-10-08 16:18:38.439136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.259 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.517 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.517 "name": "Existed_Raid", 00:10:45.517 "uuid": "166cab1d-8041-4800-bcf0-37c720b31476", 00:10:45.517 "strip_size_kb": 64, 00:10:45.517 "state": "offline", 00:10:45.517 "raid_level": "concat", 00:10:45.517 "superblock": false, 00:10:45.517 "num_base_bdevs": 3, 00:10:45.517 "num_base_bdevs_discovered": 2, 00:10:45.517 "num_base_bdevs_operational": 2, 00:10:45.517 "base_bdevs_list": [ 00:10:45.517 { 00:10:45.517 "name": null, 00:10:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.517 "is_configured": false, 00:10:45.517 "data_offset": 0, 00:10:45.517 "data_size": 65536 00:10:45.517 }, 00:10:45.517 { 00:10:45.517 "name": "BaseBdev2", 00:10:45.517 "uuid": 
"541ecaaf-0ef6-4b29-ae2d-09eb7346b6ed", 00:10:45.517 "is_configured": true, 00:10:45.517 "data_offset": 0, 00:10:45.517 "data_size": 65536 00:10:45.517 }, 00:10:45.517 { 00:10:45.517 "name": "BaseBdev3", 00:10:45.517 "uuid": "e37f8783-af44-4acf-bf48-fb82cb51296c", 00:10:45.517 "is_configured": true, 00:10:45.517 "data_offset": 0, 00:10:45.517 "data_size": 65536 00:10:45.517 } 00:10:45.517 ] 00:10:45.517 }' 00:10:45.517 16:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.517 16:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.775 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.034 [2024-10-08 16:18:39.138064] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.034 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.035 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.035 [2024-10-08 16:18:39.282672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.035 [2024-10-08 16:18:39.282757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.294 16:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.294 BaseBdev2 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.294 
16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.294 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.294 [ 00:10:46.294 { 00:10:46.294 "name": "BaseBdev2", 00:10:46.294 "aliases": [ 00:10:46.294 "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2" 00:10:46.294 ], 00:10:46.294 "product_name": "Malloc disk", 00:10:46.294 "block_size": 512, 00:10:46.294 "num_blocks": 65536, 00:10:46.294 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:46.294 "assigned_rate_limits": { 00:10:46.294 "rw_ios_per_sec": 0, 00:10:46.294 "rw_mbytes_per_sec": 0, 00:10:46.294 "r_mbytes_per_sec": 0, 00:10:46.294 "w_mbytes_per_sec": 0 00:10:46.294 }, 00:10:46.294 "claimed": false, 00:10:46.294 "zoned": false, 00:10:46.294 "supported_io_types": { 00:10:46.294 "read": true, 00:10:46.294 "write": true, 00:10:46.294 "unmap": true, 00:10:46.294 "flush": true, 00:10:46.294 "reset": true, 00:10:46.294 "nvme_admin": false, 00:10:46.294 "nvme_io": false, 00:10:46.294 "nvme_io_md": false, 00:10:46.294 "write_zeroes": true, 
00:10:46.294 "zcopy": true, 00:10:46.294 "get_zone_info": false, 00:10:46.294 "zone_management": false, 00:10:46.294 "zone_append": false, 00:10:46.294 "compare": false, 00:10:46.294 "compare_and_write": false, 00:10:46.295 "abort": true, 00:10:46.295 "seek_hole": false, 00:10:46.295 "seek_data": false, 00:10:46.295 "copy": true, 00:10:46.295 "nvme_iov_md": false 00:10:46.295 }, 00:10:46.295 "memory_domains": [ 00:10:46.295 { 00:10:46.295 "dma_device_id": "system", 00:10:46.295 "dma_device_type": 1 00:10:46.295 }, 00:10:46.295 { 00:10:46.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.295 "dma_device_type": 2 00:10:46.295 } 00:10:46.295 ], 00:10:46.295 "driver_specific": {} 00:10:46.295 } 00:10:46.295 ] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.295 BaseBdev3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.295 16:18:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.295 [ 00:10:46.295 { 00:10:46.295 "name": "BaseBdev3", 00:10:46.295 "aliases": [ 00:10:46.295 "edd5f85d-18f8-484b-b6eb-29df9fc71b3b" 00:10:46.295 ], 00:10:46.295 "product_name": "Malloc disk", 00:10:46.295 "block_size": 512, 00:10:46.295 "num_blocks": 65536, 00:10:46.295 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:46.295 "assigned_rate_limits": { 00:10:46.295 "rw_ios_per_sec": 0, 00:10:46.295 "rw_mbytes_per_sec": 0, 00:10:46.295 "r_mbytes_per_sec": 0, 00:10:46.295 "w_mbytes_per_sec": 0 00:10:46.295 }, 00:10:46.295 "claimed": false, 00:10:46.295 "zoned": false, 00:10:46.295 "supported_io_types": { 00:10:46.295 "read": true, 00:10:46.295 "write": true, 00:10:46.295 "unmap": true, 00:10:46.295 "flush": true, 00:10:46.295 "reset": true, 00:10:46.295 "nvme_admin": false, 00:10:46.295 "nvme_io": false, 00:10:46.295 "nvme_io_md": false, 00:10:46.295 "write_zeroes": true, 
00:10:46.295 "zcopy": true, 00:10:46.295 "get_zone_info": false, 00:10:46.295 "zone_management": false, 00:10:46.295 "zone_append": false, 00:10:46.295 "compare": false, 00:10:46.295 "compare_and_write": false, 00:10:46.295 "abort": true, 00:10:46.295 "seek_hole": false, 00:10:46.295 "seek_data": false, 00:10:46.295 "copy": true, 00:10:46.295 "nvme_iov_md": false 00:10:46.295 }, 00:10:46.295 "memory_domains": [ 00:10:46.295 { 00:10:46.295 "dma_device_id": "system", 00:10:46.295 "dma_device_type": 1 00:10:46.295 }, 00:10:46.295 { 00:10:46.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.295 "dma_device_type": 2 00:10:46.295 } 00:10:46.295 ], 00:10:46.295 "driver_specific": {} 00:10:46.295 } 00:10:46.295 ] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.295 [2024-10-08 16:18:39.587208] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.295 [2024-10-08 16:18:39.587280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.295 [2024-10-08 16:18:39.587323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.295 [2024-10-08 16:18:39.590161] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.295 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.554 16:18:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.554 "name": "Existed_Raid", 00:10:46.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.554 "strip_size_kb": 64, 00:10:46.554 "state": "configuring", 00:10:46.554 "raid_level": "concat", 00:10:46.554 "superblock": false, 00:10:46.554 "num_base_bdevs": 3, 00:10:46.554 "num_base_bdevs_discovered": 2, 00:10:46.554 "num_base_bdevs_operational": 3, 00:10:46.554 "base_bdevs_list": [ 00:10:46.554 { 00:10:46.554 "name": "BaseBdev1", 00:10:46.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.554 "is_configured": false, 00:10:46.554 "data_offset": 0, 00:10:46.554 "data_size": 0 00:10:46.554 }, 00:10:46.554 { 00:10:46.554 "name": "BaseBdev2", 00:10:46.554 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:46.554 "is_configured": true, 00:10:46.554 "data_offset": 0, 00:10:46.554 "data_size": 65536 00:10:46.554 }, 00:10:46.554 { 00:10:46.554 "name": "BaseBdev3", 00:10:46.554 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:46.554 "is_configured": true, 00:10:46.554 "data_offset": 0, 00:10:46.554 "data_size": 65536 00:10:46.554 } 00:10:46.554 ] 00:10:46.554 }' 00:10:46.554 16:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.554 16:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.811 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.811 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.812 [2024-10-08 16:18:40.075253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.812 "name": "Existed_Raid", 00:10:46.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.812 "strip_size_kb": 64, 00:10:46.812 "state": "configuring", 00:10:46.812 "raid_level": "concat", 00:10:46.812 "superblock": false, 
00:10:46.812 "num_base_bdevs": 3, 00:10:46.812 "num_base_bdevs_discovered": 1, 00:10:46.812 "num_base_bdevs_operational": 3, 00:10:46.812 "base_bdevs_list": [ 00:10:46.812 { 00:10:46.812 "name": "BaseBdev1", 00:10:46.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.812 "is_configured": false, 00:10:46.812 "data_offset": 0, 00:10:46.812 "data_size": 0 00:10:46.812 }, 00:10:46.812 { 00:10:46.812 "name": null, 00:10:46.812 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:46.812 "is_configured": false, 00:10:46.812 "data_offset": 0, 00:10:46.812 "data_size": 65536 00:10:46.812 }, 00:10:46.812 { 00:10:46.812 "name": "BaseBdev3", 00:10:46.812 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:46.812 "is_configured": true, 00:10:46.812 "data_offset": 0, 00:10:46.812 "data_size": 65536 00:10:46.812 } 00:10:46.812 ] 00:10:46.812 }' 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.812 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.383 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.383 
16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.641 [2024-10-08 16:18:40.717828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.641 BaseBdev1 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.641 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.641 [ 00:10:47.641 { 00:10:47.641 "name": "BaseBdev1", 00:10:47.641 "aliases": [ 00:10:47.641 "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec" 00:10:47.641 ], 00:10:47.641 "product_name": 
"Malloc disk", 00:10:47.641 "block_size": 512, 00:10:47.641 "num_blocks": 65536, 00:10:47.641 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:47.641 "assigned_rate_limits": { 00:10:47.641 "rw_ios_per_sec": 0, 00:10:47.641 "rw_mbytes_per_sec": 0, 00:10:47.641 "r_mbytes_per_sec": 0, 00:10:47.641 "w_mbytes_per_sec": 0 00:10:47.641 }, 00:10:47.641 "claimed": true, 00:10:47.641 "claim_type": "exclusive_write", 00:10:47.641 "zoned": false, 00:10:47.641 "supported_io_types": { 00:10:47.641 "read": true, 00:10:47.641 "write": true, 00:10:47.641 "unmap": true, 00:10:47.641 "flush": true, 00:10:47.641 "reset": true, 00:10:47.641 "nvme_admin": false, 00:10:47.641 "nvme_io": false, 00:10:47.641 "nvme_io_md": false, 00:10:47.641 "write_zeroes": true, 00:10:47.642 "zcopy": true, 00:10:47.642 "get_zone_info": false, 00:10:47.642 "zone_management": false, 00:10:47.642 "zone_append": false, 00:10:47.642 "compare": false, 00:10:47.642 "compare_and_write": false, 00:10:47.642 "abort": true, 00:10:47.642 "seek_hole": false, 00:10:47.642 "seek_data": false, 00:10:47.642 "copy": true, 00:10:47.642 "nvme_iov_md": false 00:10:47.642 }, 00:10:47.642 "memory_domains": [ 00:10:47.642 { 00:10:47.642 "dma_device_id": "system", 00:10:47.642 "dma_device_type": 1 00:10:47.642 }, 00:10:47.642 { 00:10:47.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.642 "dma_device_type": 2 00:10:47.642 } 00:10:47.642 ], 00:10:47.642 "driver_specific": {} 00:10:47.642 } 00:10:47.642 ] 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.642 16:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.642 "name": "Existed_Raid", 00:10:47.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.642 "strip_size_kb": 64, 00:10:47.642 "state": "configuring", 00:10:47.642 "raid_level": "concat", 00:10:47.642 "superblock": false, 00:10:47.642 "num_base_bdevs": 3, 00:10:47.642 "num_base_bdevs_discovered": 2, 00:10:47.642 "num_base_bdevs_operational": 3, 00:10:47.642 "base_bdevs_list": [ 00:10:47.642 { 00:10:47.642 "name": "BaseBdev1", 
00:10:47.642 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:47.642 "is_configured": true, 00:10:47.642 "data_offset": 0, 00:10:47.642 "data_size": 65536 00:10:47.642 }, 00:10:47.642 { 00:10:47.642 "name": null, 00:10:47.642 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:47.642 "is_configured": false, 00:10:47.642 "data_offset": 0, 00:10:47.642 "data_size": 65536 00:10:47.642 }, 00:10:47.642 { 00:10:47.642 "name": "BaseBdev3", 00:10:47.642 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:47.642 "is_configured": true, 00:10:47.642 "data_offset": 0, 00:10:47.642 "data_size": 65536 00:10:47.642 } 00:10:47.642 ] 00:10:47.642 }' 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.642 16:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.208 [2024-10-08 16:18:41.290062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.208 
16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.208 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.208 "name": "Existed_Raid", 00:10:48.208 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:48.208 "strip_size_kb": 64, 00:10:48.208 "state": "configuring", 00:10:48.208 "raid_level": "concat", 00:10:48.208 "superblock": false, 00:10:48.208 "num_base_bdevs": 3, 00:10:48.208 "num_base_bdevs_discovered": 1, 00:10:48.208 "num_base_bdevs_operational": 3, 00:10:48.208 "base_bdevs_list": [ 00:10:48.208 { 00:10:48.208 "name": "BaseBdev1", 00:10:48.208 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:48.208 "is_configured": true, 00:10:48.208 "data_offset": 0, 00:10:48.208 "data_size": 65536 00:10:48.208 }, 00:10:48.208 { 00:10:48.208 "name": null, 00:10:48.208 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:48.208 "is_configured": false, 00:10:48.208 "data_offset": 0, 00:10:48.208 "data_size": 65536 00:10:48.208 }, 00:10:48.208 { 00:10:48.208 "name": null, 00:10:48.208 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:48.208 "is_configured": false, 00:10:48.208 "data_offset": 0, 00:10:48.208 "data_size": 65536 00:10:48.208 } 00:10:48.208 ] 00:10:48.208 }' 00:10:48.209 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.209 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 [2024-10-08 16:18:41.850218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.775 "name": "Existed_Raid", 00:10:48.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.775 "strip_size_kb": 64, 00:10:48.775 "state": "configuring", 00:10:48.775 "raid_level": "concat", 00:10:48.775 "superblock": false, 00:10:48.775 "num_base_bdevs": 3, 00:10:48.775 "num_base_bdevs_discovered": 2, 00:10:48.775 "num_base_bdevs_operational": 3, 00:10:48.775 "base_bdevs_list": [ 00:10:48.775 { 00:10:48.775 "name": "BaseBdev1", 00:10:48.775 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:48.775 "is_configured": true, 00:10:48.775 "data_offset": 0, 00:10:48.775 "data_size": 65536 00:10:48.775 }, 00:10:48.775 { 00:10:48.775 "name": null, 00:10:48.775 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:48.775 "is_configured": false, 00:10:48.775 "data_offset": 0, 00:10:48.775 "data_size": 65536 00:10:48.775 }, 00:10:48.775 { 00:10:48.775 "name": "BaseBdev3", 00:10:48.775 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:48.775 "is_configured": true, 00:10:48.775 "data_offset": 0, 00:10:48.775 "data_size": 65536 00:10:48.775 } 00:10:48.775 ] 00:10:48.775 }' 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.775 16:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.344 [2024-10-08 16:18:42.454394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.344 
16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.344 "name": "Existed_Raid", 00:10:49.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.344 "strip_size_kb": 64, 00:10:49.344 "state": "configuring", 00:10:49.344 "raid_level": "concat", 00:10:49.344 "superblock": false, 00:10:49.344 "num_base_bdevs": 3, 00:10:49.344 "num_base_bdevs_discovered": 1, 00:10:49.344 "num_base_bdevs_operational": 3, 00:10:49.344 "base_bdevs_list": [ 00:10:49.344 { 00:10:49.344 "name": null, 00:10:49.344 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:49.344 "is_configured": false, 00:10:49.344 "data_offset": 0, 00:10:49.344 "data_size": 65536 00:10:49.344 }, 00:10:49.344 { 00:10:49.344 "name": null, 00:10:49.344 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:49.344 "is_configured": false, 00:10:49.344 "data_offset": 0, 00:10:49.344 "data_size": 65536 00:10:49.344 }, 00:10:49.344 { 00:10:49.344 "name": "BaseBdev3", 00:10:49.344 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:49.344 "is_configured": true, 00:10:49.344 "data_offset": 0, 00:10:49.344 "data_size": 65536 00:10:49.344 } 00:10:49.344 ] 00:10:49.344 }' 00:10:49.344 16:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.344 16:18:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 [2024-10-08 16:18:43.089122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.910 16:18:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.910 "name": "Existed_Raid", 00:10:49.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.910 "strip_size_kb": 64, 00:10:49.910 "state": "configuring", 00:10:49.910 "raid_level": "concat", 00:10:49.910 "superblock": false, 00:10:49.910 "num_base_bdevs": 3, 00:10:49.910 "num_base_bdevs_discovered": 2, 00:10:49.910 "num_base_bdevs_operational": 3, 00:10:49.910 "base_bdevs_list": [ 00:10:49.910 { 00:10:49.910 "name": null, 00:10:49.910 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:49.910 "is_configured": false, 00:10:49.910 "data_offset": 0, 00:10:49.910 "data_size": 65536 00:10:49.910 }, 00:10:49.910 { 00:10:49.910 "name": "BaseBdev2", 00:10:49.910 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:49.910 "is_configured": true, 00:10:49.910 "data_offset": 
0, 00:10:49.910 "data_size": 65536 00:10:49.910 }, 00:10:49.910 { 00:10:49.910 "name": "BaseBdev3", 00:10:49.910 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:49.910 "is_configured": true, 00:10:49.910 "data_offset": 0, 00:10:49.910 "data_size": 65536 00:10:49.910 } 00:10:49.910 ] 00:10:49.910 }' 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.910 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 [2024-10-08 16:18:43.734865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.477 [2024-10-08 16:18:43.734931] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.477 [2024-10-08 16:18:43.734951] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:50.477 [2024-10-08 16:18:43.735348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:50.477 [2024-10-08 16:18:43.735591] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.477 [2024-10-08 16:18:43.735609] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.477 [2024-10-08 16:18:43.735964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.477 NewBaseBdev 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.477 
16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.477 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 [ 00:10:50.477 { 00:10:50.477 "name": "NewBaseBdev", 00:10:50.477 "aliases": [ 00:10:50.477 "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec" 00:10:50.477 ], 00:10:50.477 "product_name": "Malloc disk", 00:10:50.477 "block_size": 512, 00:10:50.477 "num_blocks": 65536, 00:10:50.477 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:50.477 "assigned_rate_limits": { 00:10:50.477 "rw_ios_per_sec": 0, 00:10:50.477 "rw_mbytes_per_sec": 0, 00:10:50.477 "r_mbytes_per_sec": 0, 00:10:50.477 "w_mbytes_per_sec": 0 00:10:50.477 }, 00:10:50.477 "claimed": true, 00:10:50.477 "claim_type": "exclusive_write", 00:10:50.477 "zoned": false, 00:10:50.477 "supported_io_types": { 00:10:50.477 "read": true, 00:10:50.477 "write": true, 00:10:50.477 "unmap": true, 00:10:50.477 "flush": true, 00:10:50.477 "reset": true, 00:10:50.477 "nvme_admin": false, 00:10:50.477 "nvme_io": false, 00:10:50.477 "nvme_io_md": false, 00:10:50.477 "write_zeroes": true, 00:10:50.477 "zcopy": true, 00:10:50.477 "get_zone_info": false, 00:10:50.477 "zone_management": false, 00:10:50.477 "zone_append": false, 00:10:50.477 "compare": false, 00:10:50.477 "compare_and_write": false, 00:10:50.477 "abort": true, 00:10:50.477 "seek_hole": false, 00:10:50.477 "seek_data": false, 00:10:50.477 "copy": true, 00:10:50.477 "nvme_iov_md": false 00:10:50.477 }, 00:10:50.477 
"memory_domains": [ 00:10:50.477 { 00:10:50.477 "dma_device_id": "system", 00:10:50.477 "dma_device_type": 1 00:10:50.477 }, 00:10:50.477 { 00:10:50.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.477 "dma_device_type": 2 00:10:50.477 } 00:10:50.477 ], 00:10:50.478 "driver_specific": {} 00:10:50.478 } 00:10:50.478 ] 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.478 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.736 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.736 "name": "Existed_Raid", 00:10:50.736 "uuid": "1b6d2d8f-0c01-479d-a3a7-9f23438337aa", 00:10:50.736 "strip_size_kb": 64, 00:10:50.736 "state": "online", 00:10:50.736 "raid_level": "concat", 00:10:50.736 "superblock": false, 00:10:50.736 "num_base_bdevs": 3, 00:10:50.736 "num_base_bdevs_discovered": 3, 00:10:50.736 "num_base_bdevs_operational": 3, 00:10:50.736 "base_bdevs_list": [ 00:10:50.736 { 00:10:50.736 "name": "NewBaseBdev", 00:10:50.736 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:50.736 "is_configured": true, 00:10:50.736 "data_offset": 0, 00:10:50.736 "data_size": 65536 00:10:50.736 }, 00:10:50.736 { 00:10:50.736 "name": "BaseBdev2", 00:10:50.736 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:50.736 "is_configured": true, 00:10:50.736 "data_offset": 0, 00:10:50.736 "data_size": 65536 00:10:50.736 }, 00:10:50.736 { 00:10:50.736 "name": "BaseBdev3", 00:10:50.736 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:50.736 "is_configured": true, 00:10:50.736 "data_offset": 0, 00:10:50.736 "data_size": 65536 00:10:50.736 } 00:10:50.736 ] 00:10:50.736 }' 00:10:50.736 16:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.736 16:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.059 [2024-10-08 16:18:44.307471] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.059 "name": "Existed_Raid", 00:10:51.059 "aliases": [ 00:10:51.059 "1b6d2d8f-0c01-479d-a3a7-9f23438337aa" 00:10:51.059 ], 00:10:51.059 "product_name": "Raid Volume", 00:10:51.059 "block_size": 512, 00:10:51.059 "num_blocks": 196608, 00:10:51.059 "uuid": "1b6d2d8f-0c01-479d-a3a7-9f23438337aa", 00:10:51.059 "assigned_rate_limits": { 00:10:51.059 "rw_ios_per_sec": 0, 00:10:51.059 "rw_mbytes_per_sec": 0, 00:10:51.059 "r_mbytes_per_sec": 0, 00:10:51.059 "w_mbytes_per_sec": 0 00:10:51.059 }, 00:10:51.059 "claimed": false, 00:10:51.059 "zoned": false, 00:10:51.059 "supported_io_types": { 00:10:51.059 "read": true, 00:10:51.059 "write": true, 00:10:51.059 "unmap": true, 00:10:51.059 "flush": true, 00:10:51.059 "reset": true, 00:10:51.059 "nvme_admin": false, 00:10:51.059 "nvme_io": false, 00:10:51.059 "nvme_io_md": false, 00:10:51.059 
"write_zeroes": true, 00:10:51.059 "zcopy": false, 00:10:51.059 "get_zone_info": false, 00:10:51.059 "zone_management": false, 00:10:51.059 "zone_append": false, 00:10:51.059 "compare": false, 00:10:51.059 "compare_and_write": false, 00:10:51.059 "abort": false, 00:10:51.059 "seek_hole": false, 00:10:51.059 "seek_data": false, 00:10:51.059 "copy": false, 00:10:51.059 "nvme_iov_md": false 00:10:51.059 }, 00:10:51.059 "memory_domains": [ 00:10:51.059 { 00:10:51.059 "dma_device_id": "system", 00:10:51.059 "dma_device_type": 1 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.059 "dma_device_type": 2 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "dma_device_id": "system", 00:10:51.059 "dma_device_type": 1 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.059 "dma_device_type": 2 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "dma_device_id": "system", 00:10:51.059 "dma_device_type": 1 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.059 "dma_device_type": 2 00:10:51.059 } 00:10:51.059 ], 00:10:51.059 "driver_specific": { 00:10:51.059 "raid": { 00:10:51.059 "uuid": "1b6d2d8f-0c01-479d-a3a7-9f23438337aa", 00:10:51.059 "strip_size_kb": 64, 00:10:51.059 "state": "online", 00:10:51.059 "raid_level": "concat", 00:10:51.059 "superblock": false, 00:10:51.059 "num_base_bdevs": 3, 00:10:51.059 "num_base_bdevs_discovered": 3, 00:10:51.059 "num_base_bdevs_operational": 3, 00:10:51.059 "base_bdevs_list": [ 00:10:51.059 { 00:10:51.059 "name": "NewBaseBdev", 00:10:51.059 "uuid": "4c6f75eb-dd0a-4fd7-bd07-4cdfb12d4cec", 00:10:51.059 "is_configured": true, 00:10:51.059 "data_offset": 0, 00:10:51.059 "data_size": 65536 00:10:51.059 }, 00:10:51.059 { 00:10:51.059 "name": "BaseBdev2", 00:10:51.059 "uuid": "7f874b1c-8b4e-4c15-956f-44d2ea18c3a2", 00:10:51.059 "is_configured": true, 00:10:51.059 "data_offset": 0, 00:10:51.059 "data_size": 65536 00:10:51.059 }, 
00:10:51.059 { 00:10:51.059 "name": "BaseBdev3", 00:10:51.059 "uuid": "edd5f85d-18f8-484b-b6eb-29df9fc71b3b", 00:10:51.059 "is_configured": true, 00:10:51.059 "data_offset": 0, 00:10:51.059 "data_size": 65536 00:10:51.059 } 00:10:51.059 ] 00:10:51.059 } 00:10:51.059 } 00:10:51.059 }' 00:10:51.059 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.317 BaseBdev2 00:10:51.317 BaseBdev3' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.317 16:18:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.317 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.318 
16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.318 [2024-10-08 16:18:44.607172] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.318 [2024-10-08 16:18:44.607226] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.318 [2024-10-08 16:18:44.607330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.318 [2024-10-08 16:18:44.607420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.318 [2024-10-08 16:18:44.607441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65935 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65935 ']' 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65935 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.318 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65935 00:10:51.576 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.576 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.576 killing process with pid 65935 00:10:51.576 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65935' 00:10:51.576 16:18:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65935 00:10:51.576 [2024-10-08 16:18:44.648047] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.576 16:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65935 00:10:51.833 [2024-10-08 16:18:44.930750] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.228 00:10:53.228 real 0m11.940s 00:10:53.228 user 0m19.540s 00:10:53.228 sys 0m1.739s 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.228 ************************************ 00:10:53.228 END TEST raid_state_function_test 00:10:53.228 ************************************ 00:10:53.228 16:18:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:53.228 16:18:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.228 16:18:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.228 16:18:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.228 ************************************ 00:10:53.228 START TEST raid_state_function_test_sb 00:10:53.228 ************************************ 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:53.228 16:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.228 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.229 16:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66573 00:10:53.229 Process raid pid: 66573 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66573' 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66573 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66573 ']' 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.229 16:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.229 [2024-10-08 16:18:46.346606] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:10:53.229 [2024-10-08 16:18:46.346817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.487 [2024-10-08 16:18:46.534362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.487 [2024-10-08 16:18:46.785815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.745 [2024-10-08 16:18:47.001762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.745 [2024-10-08 16:18:47.001824] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.311 [2024-10-08 16:18:47.401678] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.311 [2024-10-08 16:18:47.401764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.311 [2024-10-08 
16:18:47.401781] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.311 [2024-10-08 16:18:47.401801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.311 [2024-10-08 16:18:47.401812] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.311 [2024-10-08 16:18:47.401827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.311 "name": "Existed_Raid", 00:10:54.311 "uuid": "195e08dc-cb0f-42a1-9ea5-8e750ca9b915", 00:10:54.311 "strip_size_kb": 64, 00:10:54.311 "state": "configuring", 00:10:54.311 "raid_level": "concat", 00:10:54.311 "superblock": true, 00:10:54.311 "num_base_bdevs": 3, 00:10:54.311 "num_base_bdevs_discovered": 0, 00:10:54.311 "num_base_bdevs_operational": 3, 00:10:54.311 "base_bdevs_list": [ 00:10:54.311 { 00:10:54.311 "name": "BaseBdev1", 00:10:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.311 "is_configured": false, 00:10:54.311 "data_offset": 0, 00:10:54.311 "data_size": 0 00:10:54.311 }, 00:10:54.311 { 00:10:54.311 "name": "BaseBdev2", 00:10:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.311 "is_configured": false, 00:10:54.311 "data_offset": 0, 00:10:54.311 "data_size": 0 00:10:54.311 }, 00:10:54.311 { 00:10:54.311 "name": "BaseBdev3", 00:10:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.311 "is_configured": false, 00:10:54.311 "data_offset": 0, 00:10:54.311 "data_size": 0 00:10:54.311 } 00:10:54.311 ] 00:10:54.311 }' 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.311 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 [2024-10-08 16:18:47.921683] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.878 [2024-10-08 16:18:47.921756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 [2024-10-08 16:18:47.929682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.878 [2024-10-08 16:18:47.929749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.878 [2024-10-08 16:18:47.929764] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.878 [2024-10-08 16:18:47.929781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.878 [2024-10-08 16:18:47.929791] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.878 [2024-10-08 16:18:47.929806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.878 
16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 [2024-10-08 16:18:47.986186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.878 BaseBdev1 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.878 16:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.878 [ 00:10:54.878 { 
00:10:54.878 "name": "BaseBdev1", 00:10:54.878 "aliases": [ 00:10:54.878 "d7ddc465-9e89-4cda-9ec4-17e9c4128f71" 00:10:54.878 ], 00:10:54.878 "product_name": "Malloc disk", 00:10:54.878 "block_size": 512, 00:10:54.878 "num_blocks": 65536, 00:10:54.878 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:54.878 "assigned_rate_limits": { 00:10:54.878 "rw_ios_per_sec": 0, 00:10:54.878 "rw_mbytes_per_sec": 0, 00:10:54.878 "r_mbytes_per_sec": 0, 00:10:54.878 "w_mbytes_per_sec": 0 00:10:54.878 }, 00:10:54.878 "claimed": true, 00:10:54.878 "claim_type": "exclusive_write", 00:10:54.878 "zoned": false, 00:10:54.878 "supported_io_types": { 00:10:54.878 "read": true, 00:10:54.878 "write": true, 00:10:54.878 "unmap": true, 00:10:54.878 "flush": true, 00:10:54.878 "reset": true, 00:10:54.878 "nvme_admin": false, 00:10:54.878 "nvme_io": false, 00:10:54.878 "nvme_io_md": false, 00:10:54.878 "write_zeroes": true, 00:10:54.878 "zcopy": true, 00:10:54.878 "get_zone_info": false, 00:10:54.878 "zone_management": false, 00:10:54.878 "zone_append": false, 00:10:54.879 "compare": false, 00:10:54.879 "compare_and_write": false, 00:10:54.879 "abort": true, 00:10:54.879 "seek_hole": false, 00:10:54.879 "seek_data": false, 00:10:54.879 "copy": true, 00:10:54.879 "nvme_iov_md": false 00:10:54.879 }, 00:10:54.879 "memory_domains": [ 00:10:54.879 { 00:10:54.879 "dma_device_id": "system", 00:10:54.879 "dma_device_type": 1 00:10:54.879 }, 00:10:54.879 { 00:10:54.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.879 "dma_device_type": 2 00:10:54.879 } 00:10:54.879 ], 00:10:54.879 "driver_specific": {} 00:10:54.879 } 00:10:54.879 ] 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.879 "name": "Existed_Raid", 00:10:54.879 "uuid": "a7b29ddb-3d40-4beb-8100-618b1fc14c11", 00:10:54.879 "strip_size_kb": 64, 00:10:54.879 "state": "configuring", 00:10:54.879 "raid_level": "concat", 00:10:54.879 "superblock": true, 00:10:54.879 
"num_base_bdevs": 3, 00:10:54.879 "num_base_bdevs_discovered": 1, 00:10:54.879 "num_base_bdevs_operational": 3, 00:10:54.879 "base_bdevs_list": [ 00:10:54.879 { 00:10:54.879 "name": "BaseBdev1", 00:10:54.879 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:54.879 "is_configured": true, 00:10:54.879 "data_offset": 2048, 00:10:54.879 "data_size": 63488 00:10:54.879 }, 00:10:54.879 { 00:10:54.879 "name": "BaseBdev2", 00:10:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.879 "is_configured": false, 00:10:54.879 "data_offset": 0, 00:10:54.879 "data_size": 0 00:10:54.879 }, 00:10:54.879 { 00:10:54.879 "name": "BaseBdev3", 00:10:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.879 "is_configured": false, 00:10:54.879 "data_offset": 0, 00:10:54.879 "data_size": 0 00:10:54.879 } 00:10:54.879 ] 00:10:54.879 }' 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.879 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.444 [2024-10-08 16:18:48.522444] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.444 [2024-10-08 16:18:48.522558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:55.444 
16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.444 [2024-10-08 16:18:48.530464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.444 [2024-10-08 16:18:48.532929] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.444 [2024-10-08 16:18:48.532991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.444 [2024-10-08 16:18:48.533008] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.444 [2024-10-08 16:18:48.533025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.444 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.445 "name": "Existed_Raid", 00:10:55.445 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:55.445 "strip_size_kb": 64, 00:10:55.445 "state": "configuring", 00:10:55.445 "raid_level": "concat", 00:10:55.445 "superblock": true, 00:10:55.445 "num_base_bdevs": 3, 00:10:55.445 "num_base_bdevs_discovered": 1, 00:10:55.445 "num_base_bdevs_operational": 3, 00:10:55.445 "base_bdevs_list": [ 00:10:55.445 { 00:10:55.445 "name": "BaseBdev1", 00:10:55.445 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:55.445 "is_configured": true, 00:10:55.445 "data_offset": 2048, 00:10:55.445 "data_size": 63488 00:10:55.445 }, 00:10:55.445 { 00:10:55.445 "name": "BaseBdev2", 00:10:55.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.445 "is_configured": false, 00:10:55.445 "data_offset": 0, 00:10:55.445 "data_size": 0 00:10:55.445 }, 00:10:55.445 { 00:10:55.445 "name": "BaseBdev3", 00:10:55.445 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:55.445 "is_configured": false, 00:10:55.445 "data_offset": 0, 00:10:55.445 "data_size": 0 00:10:55.445 } 00:10:55.445 ] 00:10:55.445 }' 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.445 16:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.010 [2024-10-08 16:18:49.089715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.010 BaseBdev2 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.010 [ 00:10:56.010 { 00:10:56.010 "name": "BaseBdev2", 00:10:56.010 "aliases": [ 00:10:56.010 "a7e9955d-c940-4b5d-a355-7ab435d5e9f6" 00:10:56.010 ], 00:10:56.010 "product_name": "Malloc disk", 00:10:56.010 "block_size": 512, 00:10:56.010 "num_blocks": 65536, 00:10:56.010 "uuid": "a7e9955d-c940-4b5d-a355-7ab435d5e9f6", 00:10:56.010 "assigned_rate_limits": { 00:10:56.010 "rw_ios_per_sec": 0, 00:10:56.010 "rw_mbytes_per_sec": 0, 00:10:56.010 "r_mbytes_per_sec": 0, 00:10:56.010 "w_mbytes_per_sec": 0 00:10:56.010 }, 00:10:56.010 "claimed": true, 00:10:56.010 "claim_type": "exclusive_write", 00:10:56.010 "zoned": false, 00:10:56.010 "supported_io_types": { 00:10:56.010 "read": true, 00:10:56.010 "write": true, 00:10:56.010 "unmap": true, 00:10:56.010 "flush": true, 00:10:56.010 "reset": true, 00:10:56.010 "nvme_admin": false, 00:10:56.010 "nvme_io": false, 00:10:56.010 "nvme_io_md": false, 00:10:56.010 "write_zeroes": true, 00:10:56.010 "zcopy": true, 00:10:56.010 "get_zone_info": false, 00:10:56.010 "zone_management": false, 00:10:56.010 "zone_append": false, 00:10:56.010 "compare": false, 00:10:56.010 "compare_and_write": false, 00:10:56.010 "abort": true, 00:10:56.010 "seek_hole": false, 00:10:56.010 "seek_data": false, 00:10:56.010 "copy": true, 00:10:56.010 "nvme_iov_md": false 00:10:56.010 }, 00:10:56.010 "memory_domains": [ 00:10:56.010 { 00:10:56.010 "dma_device_id": "system", 00:10:56.010 "dma_device_type": 1 00:10:56.010 }, 00:10:56.010 { 00:10:56.010 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.010 "dma_device_type": 2 00:10:56.010 } 00:10:56.010 ], 00:10:56.010 "driver_specific": {} 00:10:56.010 } 00:10:56.010 ] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.010 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.010 "name": "Existed_Raid", 00:10:56.010 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:56.010 "strip_size_kb": 64, 00:10:56.010 "state": "configuring", 00:10:56.010 "raid_level": "concat", 00:10:56.010 "superblock": true, 00:10:56.010 "num_base_bdevs": 3, 00:10:56.010 "num_base_bdevs_discovered": 2, 00:10:56.010 "num_base_bdevs_operational": 3, 00:10:56.010 "base_bdevs_list": [ 00:10:56.010 { 00:10:56.010 "name": "BaseBdev1", 00:10:56.011 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:56.011 "is_configured": true, 00:10:56.011 "data_offset": 2048, 00:10:56.011 "data_size": 63488 00:10:56.011 }, 00:10:56.011 { 00:10:56.011 "name": "BaseBdev2", 00:10:56.011 "uuid": "a7e9955d-c940-4b5d-a355-7ab435d5e9f6", 00:10:56.011 "is_configured": true, 00:10:56.011 "data_offset": 2048, 00:10:56.011 "data_size": 63488 00:10:56.011 }, 00:10:56.011 { 00:10:56.011 "name": "BaseBdev3", 00:10:56.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.011 "is_configured": false, 00:10:56.011 "data_offset": 0, 00:10:56.011 "data_size": 0 00:10:56.011 } 00:10:56.011 ] 00:10:56.011 }' 00:10:56.011 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.011 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.575 16:18:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.575 BaseBdev3 00:10:56.575 [2024-10-08 16:18:49.679055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.575 [2024-10-08 16:18:49.679419] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.575 [2024-10-08 16:18:49.679452] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:56.575 [2024-10-08 16:18:49.679830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:56.575 [2024-10-08 16:18:49.680024] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.575 [2024-10-08 16:18:49.680045] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:56.575 [2024-10-08 16:18:49.680228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.575 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.575 [ 00:10:56.575 { 00:10:56.576 "name": "BaseBdev3", 00:10:56.576 "aliases": [ 00:10:56.576 "6bfaf683-dcf5-40aa-be73-8c1d4c56d145" 00:10:56.576 ], 00:10:56.576 "product_name": "Malloc disk", 00:10:56.576 "block_size": 512, 00:10:56.576 "num_blocks": 65536, 00:10:56.576 "uuid": "6bfaf683-dcf5-40aa-be73-8c1d4c56d145", 00:10:56.576 "assigned_rate_limits": { 00:10:56.576 "rw_ios_per_sec": 0, 00:10:56.576 "rw_mbytes_per_sec": 0, 00:10:56.576 "r_mbytes_per_sec": 0, 00:10:56.576 "w_mbytes_per_sec": 0 00:10:56.576 }, 00:10:56.576 "claimed": true, 00:10:56.576 "claim_type": "exclusive_write", 00:10:56.576 "zoned": false, 00:10:56.576 "supported_io_types": { 00:10:56.576 "read": true, 00:10:56.576 "write": true, 00:10:56.576 "unmap": true, 00:10:56.576 "flush": true, 00:10:56.576 "reset": true, 00:10:56.576 "nvme_admin": false, 00:10:56.576 "nvme_io": false, 00:10:56.576 "nvme_io_md": false, 00:10:56.576 "write_zeroes": true, 00:10:56.576 "zcopy": true, 00:10:56.576 "get_zone_info": false, 00:10:56.576 "zone_management": false, 00:10:56.576 "zone_append": false, 00:10:56.576 "compare": false, 00:10:56.576 "compare_and_write": false, 00:10:56.576 "abort": true, 00:10:56.576 "seek_hole": false, 00:10:56.576 "seek_data": false, 
00:10:56.576 "copy": true, 00:10:56.576 "nvme_iov_md": false 00:10:56.576 }, 00:10:56.576 "memory_domains": [ 00:10:56.576 { 00:10:56.576 "dma_device_id": "system", 00:10:56.576 "dma_device_type": 1 00:10:56.576 }, 00:10:56.576 { 00:10:56.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.576 "dma_device_type": 2 00:10:56.576 } 00:10:56.576 ], 00:10:56.576 "driver_specific": {} 00:10:56.576 } 00:10:56.576 ] 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.576 "name": "Existed_Raid", 00:10:56.576 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:56.576 "strip_size_kb": 64, 00:10:56.576 "state": "online", 00:10:56.576 "raid_level": "concat", 00:10:56.576 "superblock": true, 00:10:56.576 "num_base_bdevs": 3, 00:10:56.576 "num_base_bdevs_discovered": 3, 00:10:56.576 "num_base_bdevs_operational": 3, 00:10:56.576 "base_bdevs_list": [ 00:10:56.576 { 00:10:56.576 "name": "BaseBdev1", 00:10:56.576 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:56.576 "is_configured": true, 00:10:56.576 "data_offset": 2048, 00:10:56.576 "data_size": 63488 00:10:56.576 }, 00:10:56.576 { 00:10:56.576 "name": "BaseBdev2", 00:10:56.576 "uuid": "a7e9955d-c940-4b5d-a355-7ab435d5e9f6", 00:10:56.576 "is_configured": true, 00:10:56.576 "data_offset": 2048, 00:10:56.576 "data_size": 63488 00:10:56.576 }, 00:10:56.576 { 00:10:56.576 "name": "BaseBdev3", 00:10:56.576 "uuid": "6bfaf683-dcf5-40aa-be73-8c1d4c56d145", 00:10:56.576 "is_configured": true, 00:10:56.576 "data_offset": 2048, 00:10:56.576 "data_size": 63488 00:10:56.576 } 00:10:56.576 ] 00:10:56.576 }' 00:10:56.576 16:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.576 16:18:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.141 [2024-10-08 16:18:50.227730] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.141 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.141 "name": "Existed_Raid", 00:10:57.141 "aliases": [ 00:10:57.141 "65a9d308-5296-4e9f-bc9c-2397652ccdf0" 00:10:57.141 ], 00:10:57.141 "product_name": "Raid Volume", 00:10:57.141 "block_size": 512, 00:10:57.141 "num_blocks": 190464, 00:10:57.141 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:57.141 "assigned_rate_limits": { 00:10:57.141 "rw_ios_per_sec": 0, 00:10:57.141 "rw_mbytes_per_sec": 0, 00:10:57.141 
"r_mbytes_per_sec": 0, 00:10:57.141 "w_mbytes_per_sec": 0 00:10:57.141 }, 00:10:57.141 "claimed": false, 00:10:57.141 "zoned": false, 00:10:57.141 "supported_io_types": { 00:10:57.141 "read": true, 00:10:57.141 "write": true, 00:10:57.141 "unmap": true, 00:10:57.141 "flush": true, 00:10:57.141 "reset": true, 00:10:57.141 "nvme_admin": false, 00:10:57.141 "nvme_io": false, 00:10:57.141 "nvme_io_md": false, 00:10:57.141 "write_zeroes": true, 00:10:57.141 "zcopy": false, 00:10:57.141 "get_zone_info": false, 00:10:57.141 "zone_management": false, 00:10:57.141 "zone_append": false, 00:10:57.141 "compare": false, 00:10:57.141 "compare_and_write": false, 00:10:57.141 "abort": false, 00:10:57.141 "seek_hole": false, 00:10:57.141 "seek_data": false, 00:10:57.141 "copy": false, 00:10:57.141 "nvme_iov_md": false 00:10:57.141 }, 00:10:57.141 "memory_domains": [ 00:10:57.141 { 00:10:57.141 "dma_device_id": "system", 00:10:57.141 "dma_device_type": 1 00:10:57.141 }, 00:10:57.141 { 00:10:57.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.141 "dma_device_type": 2 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "dma_device_id": "system", 00:10:57.142 "dma_device_type": 1 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.142 "dma_device_type": 2 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "dma_device_id": "system", 00:10:57.142 "dma_device_type": 1 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.142 "dma_device_type": 2 00:10:57.142 } 00:10:57.142 ], 00:10:57.142 "driver_specific": { 00:10:57.142 "raid": { 00:10:57.142 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:57.142 "strip_size_kb": 64, 00:10:57.142 "state": "online", 00:10:57.142 "raid_level": "concat", 00:10:57.142 "superblock": true, 00:10:57.142 "num_base_bdevs": 3, 00:10:57.142 "num_base_bdevs_discovered": 3, 00:10:57.142 "num_base_bdevs_operational": 3, 00:10:57.142 "base_bdevs_list": [ 00:10:57.142 { 00:10:57.142 
"name": "BaseBdev1", 00:10:57.142 "uuid": "d7ddc465-9e89-4cda-9ec4-17e9c4128f71", 00:10:57.142 "is_configured": true, 00:10:57.142 "data_offset": 2048, 00:10:57.142 "data_size": 63488 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "name": "BaseBdev2", 00:10:57.142 "uuid": "a7e9955d-c940-4b5d-a355-7ab435d5e9f6", 00:10:57.142 "is_configured": true, 00:10:57.142 "data_offset": 2048, 00:10:57.142 "data_size": 63488 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "name": "BaseBdev3", 00:10:57.142 "uuid": "6bfaf683-dcf5-40aa-be73-8c1d4c56d145", 00:10:57.142 "is_configured": true, 00:10:57.142 "data_offset": 2048, 00:10:57.142 "data_size": 63488 00:10:57.142 } 00:10:57.142 ] 00:10:57.142 } 00:10:57.142 } 00:10:57.142 }' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:57.142 BaseBdev2 00:10:57.142 BaseBdev3' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.142 16:18:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.142 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 [2024-10-08 16:18:50.563404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.400 [2024-10-08 16:18:50.563467] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.400 [2024-10-08 16:18:50.563554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.400 "name": "Existed_Raid", 00:10:57.400 "uuid": "65a9d308-5296-4e9f-bc9c-2397652ccdf0", 00:10:57.400 "strip_size_kb": 64, 00:10:57.400 "state": "offline", 00:10:57.400 "raid_level": "concat", 00:10:57.400 "superblock": true, 00:10:57.400 "num_base_bdevs": 3, 00:10:57.400 "num_base_bdevs_discovered": 2, 00:10:57.400 "num_base_bdevs_operational": 2, 00:10:57.400 "base_bdevs_list": [ 00:10:57.400 { 00:10:57.400 "name": null, 00:10:57.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:57.400 "is_configured": false, 00:10:57.400 "data_offset": 0, 00:10:57.400 "data_size": 63488 00:10:57.400 }, 00:10:57.400 { 00:10:57.400 "name": "BaseBdev2", 00:10:57.400 "uuid": "a7e9955d-c940-4b5d-a355-7ab435d5e9f6", 00:10:57.400 "is_configured": true, 00:10:57.400 "data_offset": 2048, 00:10:57.400 "data_size": 63488 00:10:57.400 }, 00:10:57.400 { 00:10:57.400 "name": "BaseBdev3", 00:10:57.400 "uuid": "6bfaf683-dcf5-40aa-be73-8c1d4c56d145", 00:10:57.400 "is_configured": true, 00:10:57.400 "data_offset": 2048, 00:10:57.400 "data_size": 63488 00:10:57.400 } 00:10:57.400 ] 00:10:57.400 }' 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.400 16:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.966 [2024-10-08 16:18:51.187688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.966 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.224 [2024-10-08 16:18:51.330499] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.224 [2024-10-08 16:18:51.330607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.224 BaseBdev2 00:10:58.224 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 
16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.225 [ 00:10:58.225 { 00:10:58.225 "name": "BaseBdev2", 00:10:58.225 "aliases": [ 00:10:58.225 "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5" 00:10:58.225 ], 00:10:58.225 "product_name": "Malloc disk", 00:10:58.225 "block_size": 512, 00:10:58.225 "num_blocks": 65536, 00:10:58.225 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:10:58.225 "assigned_rate_limits": { 00:10:58.225 "rw_ios_per_sec": 0, 00:10:58.225 "rw_mbytes_per_sec": 0, 00:10:58.225 "r_mbytes_per_sec": 0, 00:10:58.225 "w_mbytes_per_sec": 0 
00:10:58.225 }, 00:10:58.225 "claimed": false, 00:10:58.225 "zoned": false, 00:10:58.225 "supported_io_types": { 00:10:58.225 "read": true, 00:10:58.225 "write": true, 00:10:58.225 "unmap": true, 00:10:58.225 "flush": true, 00:10:58.225 "reset": true, 00:10:58.225 "nvme_admin": false, 00:10:58.225 "nvme_io": false, 00:10:58.225 "nvme_io_md": false, 00:10:58.225 "write_zeroes": true, 00:10:58.225 "zcopy": true, 00:10:58.225 "get_zone_info": false, 00:10:58.225 "zone_management": false, 00:10:58.225 "zone_append": false, 00:10:58.225 "compare": false, 00:10:58.225 "compare_and_write": false, 00:10:58.225 "abort": true, 00:10:58.225 "seek_hole": false, 00:10:58.225 "seek_data": false, 00:10:58.225 "copy": true, 00:10:58.225 "nvme_iov_md": false 00:10:58.225 }, 00:10:58.225 "memory_domains": [ 00:10:58.225 { 00:10:58.225 "dma_device_id": "system", 00:10:58.225 "dma_device_type": 1 00:10:58.225 }, 00:10:58.225 { 00:10:58.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.225 "dma_device_type": 2 00:10:58.225 } 00:10:58.225 ], 00:10:58.225 "driver_specific": {} 00:10:58.225 } 00:10:58.225 ] 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.225 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.483 BaseBdev3 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.483 [ 00:10:58.483 { 00:10:58.483 "name": "BaseBdev3", 00:10:58.483 "aliases": [ 00:10:58.483 "d9002801-2741-4080-8b1a-95abfb6e0cce" 00:10:58.483 ], 00:10:58.483 "product_name": "Malloc disk", 00:10:58.483 "block_size": 512, 00:10:58.483 "num_blocks": 65536, 00:10:58.483 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:10:58.483 "assigned_rate_limits": { 00:10:58.483 "rw_ios_per_sec": 0, 00:10:58.483 "rw_mbytes_per_sec": 0, 
00:10:58.483 "r_mbytes_per_sec": 0, 00:10:58.483 "w_mbytes_per_sec": 0 00:10:58.483 }, 00:10:58.483 "claimed": false, 00:10:58.483 "zoned": false, 00:10:58.483 "supported_io_types": { 00:10:58.483 "read": true, 00:10:58.483 "write": true, 00:10:58.483 "unmap": true, 00:10:58.483 "flush": true, 00:10:58.483 "reset": true, 00:10:58.483 "nvme_admin": false, 00:10:58.483 "nvme_io": false, 00:10:58.483 "nvme_io_md": false, 00:10:58.483 "write_zeroes": true, 00:10:58.483 "zcopy": true, 00:10:58.483 "get_zone_info": false, 00:10:58.483 "zone_management": false, 00:10:58.483 "zone_append": false, 00:10:58.483 "compare": false, 00:10:58.483 "compare_and_write": false, 00:10:58.483 "abort": true, 00:10:58.483 "seek_hole": false, 00:10:58.483 "seek_data": false, 00:10:58.483 "copy": true, 00:10:58.483 "nvme_iov_md": false 00:10:58.483 }, 00:10:58.483 "memory_domains": [ 00:10:58.483 { 00:10:58.483 "dma_device_id": "system", 00:10:58.483 "dma_device_type": 1 00:10:58.483 }, 00:10:58.483 { 00:10:58.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.483 "dma_device_type": 2 00:10:58.483 } 00:10:58.483 ], 00:10:58.483 "driver_specific": {} 00:10:58.483 } 00:10:58.483 ] 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.483 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.484 [2024-10-08 16:18:51.630353] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.484 [2024-10-08 16:18:51.630431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.484 [2024-10-08 16:18:51.630481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.484 [2024-10-08 16:18:51.633959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.484 16:18:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.484 "name": "Existed_Raid", 00:10:58.484 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:10:58.484 "strip_size_kb": 64, 00:10:58.484 "state": "configuring", 00:10:58.484 "raid_level": "concat", 00:10:58.484 "superblock": true, 00:10:58.484 "num_base_bdevs": 3, 00:10:58.484 "num_base_bdevs_discovered": 2, 00:10:58.484 "num_base_bdevs_operational": 3, 00:10:58.484 "base_bdevs_list": [ 00:10:58.484 { 00:10:58.484 "name": "BaseBdev1", 00:10:58.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.484 "is_configured": false, 00:10:58.484 "data_offset": 0, 00:10:58.484 "data_size": 0 00:10:58.484 }, 00:10:58.484 { 00:10:58.484 "name": "BaseBdev2", 00:10:58.484 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:10:58.484 "is_configured": true, 00:10:58.484 "data_offset": 2048, 00:10:58.484 "data_size": 63488 00:10:58.484 }, 00:10:58.484 { 00:10:58.484 "name": "BaseBdev3", 00:10:58.484 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:10:58.484 "is_configured": true, 00:10:58.484 "data_offset": 2048, 00:10:58.484 "data_size": 63488 00:10:58.484 } 00:10:58.484 ] 00:10:58.484 }' 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.484 16:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.050 [2024-10-08 16:18:52.162427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.050 16:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.050 "name": "Existed_Raid", 00:10:59.050 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:10:59.050 "strip_size_kb": 64, 00:10:59.050 "state": "configuring", 00:10:59.050 "raid_level": "concat", 00:10:59.050 "superblock": true, 00:10:59.050 "num_base_bdevs": 3, 00:10:59.050 "num_base_bdevs_discovered": 1, 00:10:59.050 "num_base_bdevs_operational": 3, 00:10:59.050 "base_bdevs_list": [ 00:10:59.050 { 00:10:59.050 "name": "BaseBdev1", 00:10:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.050 "is_configured": false, 00:10:59.050 "data_offset": 0, 00:10:59.050 "data_size": 0 00:10:59.050 }, 00:10:59.050 { 00:10:59.050 "name": null, 00:10:59.050 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:10:59.050 "is_configured": false, 00:10:59.050 "data_offset": 0, 00:10:59.050 "data_size": 63488 00:10:59.050 }, 00:10:59.050 { 00:10:59.050 "name": "BaseBdev3", 00:10:59.050 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:10:59.050 "is_configured": true, 00:10:59.050 "data_offset": 2048, 00:10:59.050 "data_size": 63488 00:10:59.050 } 00:10:59.050 ] 00:10:59.050 }' 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.050 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.414 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.414 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:59.414 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.672 [2024-10-08 16:18:52.777715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.672 BaseBdev1 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.672 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.672 [ 00:10:59.672 { 00:10:59.672 "name": "BaseBdev1", 00:10:59.672 "aliases": [ 00:10:59.672 "c52603c7-2616-4167-bbbf-185ccb7c53d7" 00:10:59.672 ], 00:10:59.672 "product_name": "Malloc disk", 00:10:59.672 "block_size": 512, 00:10:59.672 "num_blocks": 65536, 00:10:59.672 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:10:59.672 "assigned_rate_limits": { 00:10:59.672 "rw_ios_per_sec": 0, 00:10:59.672 "rw_mbytes_per_sec": 0, 00:10:59.672 "r_mbytes_per_sec": 0, 00:10:59.672 "w_mbytes_per_sec": 0 00:10:59.672 }, 00:10:59.672 "claimed": true, 00:10:59.672 "claim_type": "exclusive_write", 00:10:59.672 "zoned": false, 00:10:59.672 "supported_io_types": { 00:10:59.672 "read": true, 00:10:59.672 "write": true, 00:10:59.672 "unmap": true, 00:10:59.672 "flush": true, 00:10:59.673 "reset": true, 00:10:59.673 "nvme_admin": false, 00:10:59.673 "nvme_io": false, 00:10:59.673 "nvme_io_md": false, 00:10:59.673 "write_zeroes": true, 00:10:59.673 "zcopy": true, 00:10:59.673 "get_zone_info": false, 00:10:59.673 "zone_management": false, 00:10:59.673 "zone_append": false, 00:10:59.673 "compare": false, 00:10:59.673 "compare_and_write": false, 00:10:59.673 "abort": true, 00:10:59.673 "seek_hole": false, 00:10:59.673 "seek_data": false, 00:10:59.673 "copy": true, 00:10:59.673 "nvme_iov_md": false 00:10:59.673 }, 00:10:59.673 "memory_domains": [ 00:10:59.673 { 00:10:59.673 "dma_device_id": "system", 00:10:59.673 "dma_device_type": 1 00:10:59.673 }, 00:10:59.673 { 00:10:59.673 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:59.673 "dma_device_type": 2 00:10:59.673 } 00:10:59.673 ], 00:10:59.673 "driver_specific": {} 00:10:59.673 } 00:10:59.673 ] 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.673 "name": "Existed_Raid", 00:10:59.673 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:10:59.673 "strip_size_kb": 64, 00:10:59.673 "state": "configuring", 00:10:59.673 "raid_level": "concat", 00:10:59.673 "superblock": true, 00:10:59.673 "num_base_bdevs": 3, 00:10:59.673 "num_base_bdevs_discovered": 2, 00:10:59.673 "num_base_bdevs_operational": 3, 00:10:59.673 "base_bdevs_list": [ 00:10:59.673 { 00:10:59.673 "name": "BaseBdev1", 00:10:59.673 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:10:59.673 "is_configured": true, 00:10:59.673 "data_offset": 2048, 00:10:59.673 "data_size": 63488 00:10:59.673 }, 00:10:59.673 { 00:10:59.673 "name": null, 00:10:59.673 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:10:59.673 "is_configured": false, 00:10:59.673 "data_offset": 0, 00:10:59.673 "data_size": 63488 00:10:59.673 }, 00:10:59.673 { 00:10:59.673 "name": "BaseBdev3", 00:10:59.673 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:10:59.673 "is_configured": true, 00:10:59.673 "data_offset": 2048, 00:10:59.673 "data_size": 63488 00:10:59.673 } 00:10:59.673 ] 00:10:59.673 }' 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.673 16:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.240 [2024-10-08 16:18:53.381918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.240 "name": "Existed_Raid", 00:11:00.240 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:00.240 "strip_size_kb": 64, 00:11:00.240 "state": "configuring", 00:11:00.240 "raid_level": "concat", 00:11:00.240 "superblock": true, 00:11:00.240 "num_base_bdevs": 3, 00:11:00.240 "num_base_bdevs_discovered": 1, 00:11:00.240 "num_base_bdevs_operational": 3, 00:11:00.240 "base_bdevs_list": [ 00:11:00.240 { 00:11:00.240 "name": "BaseBdev1", 00:11:00.240 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:00.240 "is_configured": true, 00:11:00.240 "data_offset": 2048, 00:11:00.240 "data_size": 63488 00:11:00.240 }, 00:11:00.240 { 00:11:00.240 "name": null, 00:11:00.240 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:00.240 "is_configured": false, 00:11:00.240 "data_offset": 0, 00:11:00.240 "data_size": 63488 00:11:00.240 }, 00:11:00.240 { 00:11:00.240 "name": null, 00:11:00.240 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:00.240 "is_configured": false, 00:11:00.240 "data_offset": 0, 00:11:00.240 "data_size": 63488 00:11:00.240 } 00:11:00.240 ] 00:11:00.240 }' 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.240 16:18:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.806 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 [2024-10-08 16:18:53.946067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.807 16:18:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.807 16:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.807 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.807 "name": "Existed_Raid", 00:11:00.807 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:00.807 "strip_size_kb": 64, 00:11:00.807 "state": "configuring", 00:11:00.807 "raid_level": "concat", 00:11:00.807 "superblock": true, 00:11:00.807 "num_base_bdevs": 3, 00:11:00.807 "num_base_bdevs_discovered": 2, 00:11:00.807 "num_base_bdevs_operational": 3, 00:11:00.807 "base_bdevs_list": [ 00:11:00.807 { 00:11:00.807 "name": "BaseBdev1", 00:11:00.807 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:00.807 "is_configured": true, 00:11:00.807 "data_offset": 2048, 00:11:00.807 "data_size": 63488 00:11:00.807 }, 00:11:00.807 { 00:11:00.807 "name": null, 00:11:00.807 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:00.807 "is_configured": 
false, 00:11:00.807 "data_offset": 0, 00:11:00.807 "data_size": 63488 00:11:00.807 }, 00:11:00.807 { 00:11:00.807 "name": "BaseBdev3", 00:11:00.807 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:00.807 "is_configured": true, 00:11:00.807 "data_offset": 2048, 00:11:00.807 "data_size": 63488 00:11:00.807 } 00:11:00.807 ] 00:11:00.807 }' 00:11:00.807 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.807 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 [2024-10-08 16:18:54.502370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.374 16:18:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.374 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.374 "name": "Existed_Raid", 00:11:01.374 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:01.374 "strip_size_kb": 64, 00:11:01.374 "state": "configuring", 00:11:01.374 "raid_level": "concat", 00:11:01.374 "superblock": true, 00:11:01.374 "num_base_bdevs": 3, 00:11:01.374 
"num_base_bdevs_discovered": 1, 00:11:01.374 "num_base_bdevs_operational": 3, 00:11:01.374 "base_bdevs_list": [ 00:11:01.374 { 00:11:01.374 "name": null, 00:11:01.374 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:01.374 "is_configured": false, 00:11:01.374 "data_offset": 0, 00:11:01.374 "data_size": 63488 00:11:01.374 }, 00:11:01.374 { 00:11:01.375 "name": null, 00:11:01.375 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:01.375 "is_configured": false, 00:11:01.375 "data_offset": 0, 00:11:01.375 "data_size": 63488 00:11:01.375 }, 00:11:01.375 { 00:11:01.375 "name": "BaseBdev3", 00:11:01.375 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:01.375 "is_configured": true, 00:11:01.375 "data_offset": 2048, 00:11:01.375 "data_size": 63488 00:11:01.375 } 00:11:01.375 ] 00:11:01.375 }' 00:11:01.375 16:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.375 16:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.941 16:18:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 [2024-10-08 16:18:55.163334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.941 
16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.941 "name": "Existed_Raid", 00:11:01.941 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:01.941 "strip_size_kb": 64, 00:11:01.941 "state": "configuring", 00:11:01.941 "raid_level": "concat", 00:11:01.941 "superblock": true, 00:11:01.941 "num_base_bdevs": 3, 00:11:01.941 "num_base_bdevs_discovered": 2, 00:11:01.941 "num_base_bdevs_operational": 3, 00:11:01.941 "base_bdevs_list": [ 00:11:01.941 { 00:11:01.941 "name": null, 00:11:01.941 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:01.941 "is_configured": false, 00:11:01.941 "data_offset": 0, 00:11:01.941 "data_size": 63488 00:11:01.941 }, 00:11:01.941 { 00:11:01.941 "name": "BaseBdev2", 00:11:01.941 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:01.941 "is_configured": true, 00:11:01.941 "data_offset": 2048, 00:11:01.941 "data_size": 63488 00:11:01.941 }, 00:11:01.941 { 00:11:01.941 "name": "BaseBdev3", 00:11:01.941 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:01.941 "is_configured": true, 00:11:01.941 "data_offset": 2048, 00:11:01.941 "data_size": 63488 00:11:01.941 } 00:11:01.941 ] 00:11:01.941 }' 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.941 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c52603c7-2616-4167-bbbf-185ccb7c53d7 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 [2024-10-08 16:18:55.809442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:02.508 [2024-10-08 16:18:55.809756] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:02.508 [2024-10-08 16:18:55.809784] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:02.508 [2024-10-08 16:18:55.810092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:02.508 NewBaseBdev 00:11:02.508 [2024-10-08 16:18:55.810286] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:02.508 [2024-10-08 16:18:55.810303] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:11:02.508 [2024-10-08 16:18:55.810476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 [ 00:11:02.508 { 00:11:02.508 "name": "NewBaseBdev", 00:11:02.508 "aliases": [ 00:11:02.766 "c52603c7-2616-4167-bbbf-185ccb7c53d7" 00:11:02.766 ], 00:11:02.766 "product_name": "Malloc disk", 00:11:02.766 "block_size": 512, 
00:11:02.766 "num_blocks": 65536, 00:11:02.766 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:02.766 "assigned_rate_limits": { 00:11:02.766 "rw_ios_per_sec": 0, 00:11:02.766 "rw_mbytes_per_sec": 0, 00:11:02.766 "r_mbytes_per_sec": 0, 00:11:02.766 "w_mbytes_per_sec": 0 00:11:02.766 }, 00:11:02.766 "claimed": true, 00:11:02.766 "claim_type": "exclusive_write", 00:11:02.766 "zoned": false, 00:11:02.766 "supported_io_types": { 00:11:02.766 "read": true, 00:11:02.766 "write": true, 00:11:02.766 "unmap": true, 00:11:02.766 "flush": true, 00:11:02.766 "reset": true, 00:11:02.766 "nvme_admin": false, 00:11:02.766 "nvme_io": false, 00:11:02.766 "nvme_io_md": false, 00:11:02.766 "write_zeroes": true, 00:11:02.766 "zcopy": true, 00:11:02.766 "get_zone_info": false, 00:11:02.766 "zone_management": false, 00:11:02.766 "zone_append": false, 00:11:02.767 "compare": false, 00:11:02.767 "compare_and_write": false, 00:11:02.767 "abort": true, 00:11:02.767 "seek_hole": false, 00:11:02.767 "seek_data": false, 00:11:02.767 "copy": true, 00:11:02.767 "nvme_iov_md": false 00:11:02.767 }, 00:11:02.767 "memory_domains": [ 00:11:02.767 { 00:11:02.767 "dma_device_id": "system", 00:11:02.767 "dma_device_type": 1 00:11:02.767 }, 00:11:02.767 { 00:11:02.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.767 "dma_device_type": 2 00:11:02.767 } 00:11:02.767 ], 00:11:02.767 "driver_specific": {} 00:11:02.767 } 00:11:02.767 ] 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.767 "name": "Existed_Raid", 00:11:02.767 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:02.767 "strip_size_kb": 64, 00:11:02.767 "state": "online", 00:11:02.767 "raid_level": "concat", 00:11:02.767 "superblock": true, 00:11:02.767 "num_base_bdevs": 3, 00:11:02.767 "num_base_bdevs_discovered": 3, 00:11:02.767 "num_base_bdevs_operational": 3, 00:11:02.767 "base_bdevs_list": [ 00:11:02.767 { 00:11:02.767 "name": "NewBaseBdev", 00:11:02.767 "uuid": 
"c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:02.767 "is_configured": true, 00:11:02.767 "data_offset": 2048, 00:11:02.767 "data_size": 63488 00:11:02.767 }, 00:11:02.767 { 00:11:02.767 "name": "BaseBdev2", 00:11:02.767 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:02.767 "is_configured": true, 00:11:02.767 "data_offset": 2048, 00:11:02.767 "data_size": 63488 00:11:02.767 }, 00:11:02.767 { 00:11:02.767 "name": "BaseBdev3", 00:11:02.767 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:02.767 "is_configured": true, 00:11:02.767 "data_offset": 2048, 00:11:02.767 "data_size": 63488 00:11:02.767 } 00:11:02.767 ] 00:11:02.767 }' 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.767 16:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.025 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:03.025 [2024-10-08 16:18:56.338066] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.284 "name": "Existed_Raid", 00:11:03.284 "aliases": [ 00:11:03.284 "a6123312-6d59-44d4-ada9-4b4e09a4c438" 00:11:03.284 ], 00:11:03.284 "product_name": "Raid Volume", 00:11:03.284 "block_size": 512, 00:11:03.284 "num_blocks": 190464, 00:11:03.284 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:03.284 "assigned_rate_limits": { 00:11:03.284 "rw_ios_per_sec": 0, 00:11:03.284 "rw_mbytes_per_sec": 0, 00:11:03.284 "r_mbytes_per_sec": 0, 00:11:03.284 "w_mbytes_per_sec": 0 00:11:03.284 }, 00:11:03.284 "claimed": false, 00:11:03.284 "zoned": false, 00:11:03.284 "supported_io_types": { 00:11:03.284 "read": true, 00:11:03.284 "write": true, 00:11:03.284 "unmap": true, 00:11:03.284 "flush": true, 00:11:03.284 "reset": true, 00:11:03.284 "nvme_admin": false, 00:11:03.284 "nvme_io": false, 00:11:03.284 "nvme_io_md": false, 00:11:03.284 "write_zeroes": true, 00:11:03.284 "zcopy": false, 00:11:03.284 "get_zone_info": false, 00:11:03.284 "zone_management": false, 00:11:03.284 "zone_append": false, 00:11:03.284 "compare": false, 00:11:03.284 "compare_and_write": false, 00:11:03.284 "abort": false, 00:11:03.284 "seek_hole": false, 00:11:03.284 "seek_data": false, 00:11:03.284 "copy": false, 00:11:03.284 "nvme_iov_md": false 00:11:03.284 }, 00:11:03.284 "memory_domains": [ 00:11:03.284 { 00:11:03.284 "dma_device_id": "system", 00:11:03.284 "dma_device_type": 1 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.284 "dma_device_type": 2 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "dma_device_id": "system", 00:11:03.284 "dma_device_type": 1 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.284 "dma_device_type": 2 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "dma_device_id": "system", 00:11:03.284 "dma_device_type": 1 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.284 "dma_device_type": 2 00:11:03.284 } 00:11:03.284 ], 00:11:03.284 "driver_specific": { 00:11:03.284 "raid": { 00:11:03.284 "uuid": "a6123312-6d59-44d4-ada9-4b4e09a4c438", 00:11:03.284 "strip_size_kb": 64, 00:11:03.284 "state": "online", 00:11:03.284 "raid_level": "concat", 00:11:03.284 "superblock": true, 00:11:03.284 "num_base_bdevs": 3, 00:11:03.284 "num_base_bdevs_discovered": 3, 00:11:03.284 "num_base_bdevs_operational": 3, 00:11:03.284 "base_bdevs_list": [ 00:11:03.284 { 00:11:03.284 "name": "NewBaseBdev", 00:11:03.284 "uuid": "c52603c7-2616-4167-bbbf-185ccb7c53d7", 00:11:03.284 "is_configured": true, 00:11:03.284 "data_offset": 2048, 00:11:03.284 "data_size": 63488 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "name": "BaseBdev2", 00:11:03.284 "uuid": "9d6c5c18-e8d7-4ca1-9e5b-69dd1ff02dd5", 00:11:03.284 "is_configured": true, 00:11:03.284 "data_offset": 2048, 00:11:03.284 "data_size": 63488 00:11:03.284 }, 00:11:03.284 { 00:11:03.284 "name": "BaseBdev3", 00:11:03.284 "uuid": "d9002801-2741-4080-8b1a-95abfb6e0cce", 00:11:03.284 "is_configured": true, 00:11:03.284 "data_offset": 2048, 00:11:03.284 "data_size": 63488 00:11:03.284 } 00:11:03.284 ] 00:11:03.284 } 00:11:03.284 } 00:11:03.284 }' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.284 BaseBdev2 00:11:03.284 BaseBdev3' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.284 16:18:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.285 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.542 [2024-10-08 16:18:56.633754] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.542 [2024-10-08 16:18:56.633818] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.542 [2024-10-08 16:18:56.633946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.542 [2024-10-08 16:18:56.634026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.542 [2024-10-08 16:18:56.634047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66573 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66573 ']' 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66573 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66573 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.542 killing process with pid 66573 00:11:03.542 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66573' 00:11:03.543 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66573 00:11:03.543 [2024-10-08 16:18:56.670774] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.543 16:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66573 00:11:03.800 [2024-10-08 16:18:56.941970] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.240 16:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.240 00:11:05.240 real 0m11.911s 00:11:05.240 user 0m19.614s 00:11:05.240 sys 0m1.691s 00:11:05.240 16:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:05.240 16:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.240 ************************************ 00:11:05.240 END TEST raid_state_function_test_sb 00:11:05.240 ************************************ 00:11:05.240 16:18:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:05.240 16:18:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.240 16:18:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.240 16:18:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.240 ************************************ 00:11:05.240 START TEST raid_superblock_test 00:11:05.240 ************************************ 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:05.240 16:18:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67211 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67211 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67211 ']' 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.240 16:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.240 [2024-10-08 16:18:58.292058] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:11:05.240 [2024-10-08 16:18:58.292244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67211 ] 00:11:05.240 [2024-10-08 16:18:58.466073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.500 [2024-10-08 16:18:58.786975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.757 [2024-10-08 16:18:59.013960] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.757 [2024-10-08 16:18:59.014059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:06.015 
16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.015 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.320 malloc1 00:11:06.320 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.320 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:06.320 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.320 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.320 [2024-10-08 16:18:59.391784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.320 [2024-10-08 16:18:59.391873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.320 [2024-10-08 16:18:59.391911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:06.320 [2024-10-08 16:18:59.391931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.320 [2024-10-08 16:18:59.394944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.320 [2024-10-08 16:18:59.395001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.320 pt1 00:11:06.320 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 malloc2 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 [2024-10-08 16:18:59.456127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.321 [2024-10-08 16:18:59.456363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.321 [2024-10-08 16:18:59.456410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:06.321 [2024-10-08 16:18:59.456427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.321 [2024-10-08 16:18:59.459523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.321 [2024-10-08 16:18:59.459757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.321 
pt2 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 malloc3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 [2024-10-08 16:18:59.512569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.321 [2024-10-08 16:18:59.512802] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.321 [2024-10-08 16:18:59.512851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:06.321 [2024-10-08 16:18:59.512869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.321 [2024-10-08 16:18:59.515885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.321 [2024-10-08 16:18:59.516054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.321 pt3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 [2024-10-08 16:18:59.520770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.321 [2024-10-08 16:18:59.523418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.321 [2024-10-08 16:18:59.523689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.321 [2024-10-08 16:18:59.523929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:06.321 [2024-10-08 16:18:59.523954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:06.321 [2024-10-08 16:18:59.524278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
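The concat raid assembled above reports "blockcnt 190464, blocklen 512". Those numbers are consistent with the three 32 MiB malloc bdevs created earlier and the 2048-block superblock offset reported in the `base_bdevs_list` output below. A minimal cross-check of the arithmetic (assuming concat capacity is simply the sum of each base bdev's usable data region, which this log's numbers support; none of these variable names are SPDK APIs):

```python
# Cross-check of the sizes reported in the log above.
MIB = 1024 * 1024
blocklen = 512                           # from "bdev_malloc_create 32 512"
malloc_blocks = 32 * MIB // blocklen     # 32 MiB malloc bdev -> 65536 blocks
data_offset = 2048                       # superblock reservation (-s flag), per base bdev
data_size = malloc_blocks - data_offset  # usable blocks per base bdev -> 63488
raid_blockcnt = 3 * data_size            # three concat members -> 190464
```

This matches the `data_offset: 2048` / `data_size: 63488` fields shown for pt1, pt2, and pt3 in the `bdev_raid_get_bdevs` output that follows.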
00:11:06.321 [2024-10-08 16:18:59.524501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:06.321 [2024-10-08 16:18:59.524519] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:06.321 [2024-10-08 16:18:59.524810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.321 16:18:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.321 "name": "raid_bdev1", 00:11:06.321 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:06.321 "strip_size_kb": 64, 00:11:06.321 "state": "online", 00:11:06.321 "raid_level": "concat", 00:11:06.321 "superblock": true, 00:11:06.321 "num_base_bdevs": 3, 00:11:06.321 "num_base_bdevs_discovered": 3, 00:11:06.321 "num_base_bdevs_operational": 3, 00:11:06.321 "base_bdevs_list": [ 00:11:06.321 { 00:11:06.321 "name": "pt1", 00:11:06.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 }, 00:11:06.321 { 00:11:06.321 "name": "pt2", 00:11:06.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 }, 00:11:06.321 { 00:11:06.321 "name": "pt3", 00:11:06.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 } 00:11:06.321 ] 00:11:06.321 }' 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.321 16:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.896 [2024-10-08 16:19:00.057481] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.896 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.896 "name": "raid_bdev1", 00:11:06.896 "aliases": [ 00:11:06.896 "c7c17acb-910a-41aa-93c7-0e078594c2ad" 00:11:06.896 ], 00:11:06.896 "product_name": "Raid Volume", 00:11:06.896 "block_size": 512, 00:11:06.896 "num_blocks": 190464, 00:11:06.896 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:06.896 "assigned_rate_limits": { 00:11:06.896 "rw_ios_per_sec": 0, 00:11:06.896 "rw_mbytes_per_sec": 0, 00:11:06.896 "r_mbytes_per_sec": 0, 00:11:06.896 "w_mbytes_per_sec": 0 00:11:06.896 }, 00:11:06.896 "claimed": false, 00:11:06.896 "zoned": false, 00:11:06.896 "supported_io_types": { 00:11:06.896 "read": true, 00:11:06.896 "write": true, 00:11:06.896 "unmap": true, 00:11:06.896 "flush": true, 00:11:06.896 "reset": true, 00:11:06.896 "nvme_admin": false, 00:11:06.896 "nvme_io": false, 00:11:06.896 "nvme_io_md": false, 00:11:06.896 "write_zeroes": true, 00:11:06.896 "zcopy": false, 00:11:06.896 "get_zone_info": false, 00:11:06.896 "zone_management": false, 00:11:06.896 "zone_append": false, 00:11:06.896 "compare": 
false, 00:11:06.896 "compare_and_write": false, 00:11:06.896 "abort": false, 00:11:06.896 "seek_hole": false, 00:11:06.896 "seek_data": false, 00:11:06.896 "copy": false, 00:11:06.896 "nvme_iov_md": false 00:11:06.896 }, 00:11:06.896 "memory_domains": [ 00:11:06.896 { 00:11:06.896 "dma_device_id": "system", 00:11:06.896 "dma_device_type": 1 00:11:06.896 }, 00:11:06.896 { 00:11:06.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.896 "dma_device_type": 2 00:11:06.896 }, 00:11:06.896 { 00:11:06.896 "dma_device_id": "system", 00:11:06.896 "dma_device_type": 1 00:11:06.896 }, 00:11:06.897 { 00:11:06.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.897 "dma_device_type": 2 00:11:06.897 }, 00:11:06.897 { 00:11:06.897 "dma_device_id": "system", 00:11:06.897 "dma_device_type": 1 00:11:06.897 }, 00:11:06.897 { 00:11:06.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.897 "dma_device_type": 2 00:11:06.897 } 00:11:06.897 ], 00:11:06.897 "driver_specific": { 00:11:06.897 "raid": { 00:11:06.897 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:06.897 "strip_size_kb": 64, 00:11:06.897 "state": "online", 00:11:06.897 "raid_level": "concat", 00:11:06.897 "superblock": true, 00:11:06.897 "num_base_bdevs": 3, 00:11:06.897 "num_base_bdevs_discovered": 3, 00:11:06.897 "num_base_bdevs_operational": 3, 00:11:06.897 "base_bdevs_list": [ 00:11:06.897 { 00:11:06.897 "name": "pt1", 00:11:06.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.897 "is_configured": true, 00:11:06.897 "data_offset": 2048, 00:11:06.897 "data_size": 63488 00:11:06.897 }, 00:11:06.897 { 00:11:06.897 "name": "pt2", 00:11:06.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.897 "is_configured": true, 00:11:06.897 "data_offset": 2048, 00:11:06.897 "data_size": 63488 00:11:06.897 }, 00:11:06.897 { 00:11:06.897 "name": "pt3", 00:11:06.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.897 "is_configured": true, 00:11:06.897 "data_offset": 2048, 00:11:06.897 
"data_size": 63488 00:11:06.897 } 00:11:06.897 ] 00:11:06.897 } 00:11:06.897 } 00:11:06.897 }' 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:06.897 pt2 00:11:06.897 pt3' 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:07.155 [2024-10-08 16:19:00.373395] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c7c17acb-910a-41aa-93c7-0e078594c2ad 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c7c17acb-910a-41aa-93c7-0e078594c2ad ']' 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.155 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.156 [2024-10-08 16:19:00.425005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.156 [2024-10-08 16:19:00.425045] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.156 [2024-10-08 16:19:00.425141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.156 [2024-10-08 16:19:00.425228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.156 [2024-10-08 16:19:00.425248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.156 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.414 16:19:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.414 [2024-10-08 16:19:00.569070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:07.414 [2024-10-08 16:19:00.571793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:11:07.414 [2024-10-08 16:19:00.571910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:07.414 [2024-10-08 16:19:00.572105] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:07.414 [2024-10-08 16:19:00.572415] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:07.414 [2024-10-08 16:19:00.572625] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:07.414 [2024-10-08 16:19:00.572873] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.414 [2024-10-08 16:19:00.572924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:07.414 request: 00:11:07.414 { 00:11:07.414 "name": "raid_bdev1", 00:11:07.414 "raid_level": "concat", 00:11:07.414 "base_bdevs": [ 00:11:07.414 "malloc1", 00:11:07.414 "malloc2", 00:11:07.414 "malloc3" 00:11:07.414 ], 00:11:07.414 "strip_size_kb": 64, 00:11:07.414 "superblock": false, 00:11:07.414 "method": "bdev_raid_create", 00:11:07.414 "req_id": 1 00:11:07.414 } 00:11:07.414 Got JSON-RPC error response 00:11:07.414 response: 00:11:07.414 { 00:11:07.414 "code": -17, 00:11:07.414 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:07.414 } 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
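The `NOT rpc_cmd bdev_raid_create ...` step above deliberately reuses malloc bdevs that still carry the old superblock, and the RPC fails with the JSON-RPC error body logged in the trace. A short sketch parsing that response (the JSON is reconstructed from the log; the errno observation is an assumption based on `-17` matching Linux `EEXIST`, which the `es=1` exit handling in `autotest_common.sh` then treats as the expected failure):

```python
import errno
import json

# JSON-RPC error body as reported in the log above.
response = json.loads("""{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}""")

# -17 is negated EEXIST, i.e. the bdev layer refuses to build a new raid
# over base bdevs whose superblocks name a different (existing) raid bdev.
is_eexist = response["code"] == -errno.EEXIST
```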
00:11:07.414 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 [2024-10-08 16:19:00.637356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.415 [2024-10-08 16:19:00.637442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.415 [2024-10-08 16:19:00.637474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:07.415 [2024-10-08 16:19:00.637490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.415 [2024-10-08 16:19:00.640354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.415 [2024-10-08 16:19:00.640604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.415 [2024-10-08 16:19:00.640738] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.415 [2024-10-08 16:19:00.640810] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.415 pt1 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.415 "name": "raid_bdev1", 
00:11:07.415 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:07.415 "strip_size_kb": 64, 00:11:07.415 "state": "configuring", 00:11:07.415 "raid_level": "concat", 00:11:07.415 "superblock": true, 00:11:07.415 "num_base_bdevs": 3, 00:11:07.415 "num_base_bdevs_discovered": 1, 00:11:07.415 "num_base_bdevs_operational": 3, 00:11:07.415 "base_bdevs_list": [ 00:11:07.415 { 00:11:07.415 "name": "pt1", 00:11:07.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.415 "is_configured": true, 00:11:07.415 "data_offset": 2048, 00:11:07.415 "data_size": 63488 00:11:07.415 }, 00:11:07.415 { 00:11:07.415 "name": null, 00:11:07.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.415 "is_configured": false, 00:11:07.415 "data_offset": 2048, 00:11:07.415 "data_size": 63488 00:11:07.415 }, 00:11:07.415 { 00:11:07.415 "name": null, 00:11:07.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.415 "is_configured": false, 00:11:07.415 "data_offset": 2048, 00:11:07.415 "data_size": 63488 00:11:07.415 } 00:11:07.415 ] 00:11:07.415 }' 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.415 16:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.980 [2024-10-08 16:19:01.153505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.980 [2024-10-08 16:19:01.153874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.980 [2024-10-08 16:19:01.153928] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:07.980 [2024-10-08 16:19:01.153947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.980 [2024-10-08 16:19:01.154541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.980 [2024-10-08 16:19:01.154591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.980 [2024-10-08 16:19:01.154705] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.980 [2024-10-08 16:19:01.154772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.980 pt2 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.980 [2024-10-08 16:19:01.161462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.980 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.980 "name": "raid_bdev1", 00:11:07.980 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:07.980 "strip_size_kb": 64, 00:11:07.980 "state": "configuring", 00:11:07.980 "raid_level": "concat", 00:11:07.980 "superblock": true, 00:11:07.980 "num_base_bdevs": 3, 00:11:07.980 "num_base_bdevs_discovered": 1, 00:11:07.980 "num_base_bdevs_operational": 3, 00:11:07.980 "base_bdevs_list": [ 00:11:07.980 { 00:11:07.980 "name": "pt1", 00:11:07.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.980 "is_configured": true, 00:11:07.980 "data_offset": 2048, 00:11:07.980 "data_size": 63488 00:11:07.980 }, 00:11:07.980 { 00:11:07.980 "name": null, 00:11:07.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.980 "is_configured": false, 00:11:07.981 "data_offset": 0, 00:11:07.981 "data_size": 63488 00:11:07.981 }, 00:11:07.981 { 00:11:07.981 "name": null, 00:11:07.981 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.981 "is_configured": false, 00:11:07.981 "data_offset": 2048, 00:11:07.981 "data_size": 63488 00:11:07.981 } 00:11:07.981 ] 00:11:07.981 }' 00:11:07.981 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.981 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.546 [2024-10-08 16:19:01.697641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.546 [2024-10-08 16:19:01.697767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.546 [2024-10-08 16:19:01.697797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:08.546 [2024-10-08 16:19:01.697816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.546 [2024-10-08 16:19:01.698412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.546 [2024-10-08 16:19:01.698453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.546 [2024-10-08 16:19:01.698579] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.546 [2024-10-08 16:19:01.698635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.546 pt2 00:11:08.546 16:19:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.546 [2024-10-08 16:19:01.709598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.546 [2024-10-08 16:19:01.709892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.546 [2024-10-08 16:19:01.709959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.546 [2024-10-08 16:19:01.710094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.546 [2024-10-08 16:19:01.710601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.546 [2024-10-08 16:19:01.710771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.546 [2024-10-08 16:19:01.711021] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:08.546 [2024-10-08 16:19:01.711178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.546 [2024-10-08 16:19:01.711452] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.546 [2024-10-08 16:19:01.711610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.546 [2024-10-08 16:19:01.711984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:08.546 [2024-10-08 16:19:01.712309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.546 [2024-10-08 16:19:01.712421] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:08.546 [2024-10-08 16:19:01.712812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.546 pt3 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.546 16:19:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.546 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.546 "name": "raid_bdev1", 00:11:08.546 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:08.546 "strip_size_kb": 64, 00:11:08.546 "state": "online", 00:11:08.546 "raid_level": "concat", 00:11:08.546 "superblock": true, 00:11:08.546 "num_base_bdevs": 3, 00:11:08.547 "num_base_bdevs_discovered": 3, 00:11:08.547 "num_base_bdevs_operational": 3, 00:11:08.547 "base_bdevs_list": [ 00:11:08.547 { 00:11:08.547 "name": "pt1", 00:11:08.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.547 "is_configured": true, 00:11:08.547 "data_offset": 2048, 00:11:08.547 "data_size": 63488 00:11:08.547 }, 00:11:08.547 { 00:11:08.547 "name": "pt2", 00:11:08.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.547 "is_configured": true, 00:11:08.547 "data_offset": 2048, 00:11:08.547 "data_size": 63488 00:11:08.547 }, 00:11:08.547 { 00:11:08.547 "name": "pt3", 00:11:08.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.547 "is_configured": true, 00:11:08.547 "data_offset": 2048, 00:11:08.547 "data_size": 63488 00:11:08.547 } 00:11:08.547 ] 00:11:08.547 }' 00:11:08.547 16:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.547 16:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 [2024-10-08 16:19:02.250170] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.111 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.111 "name": "raid_bdev1", 00:11:09.111 "aliases": [ 00:11:09.111 "c7c17acb-910a-41aa-93c7-0e078594c2ad" 00:11:09.112 ], 00:11:09.112 "product_name": "Raid Volume", 00:11:09.112 "block_size": 512, 00:11:09.112 "num_blocks": 190464, 00:11:09.112 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:09.112 "assigned_rate_limits": { 00:11:09.112 "rw_ios_per_sec": 0, 00:11:09.112 "rw_mbytes_per_sec": 0, 00:11:09.112 "r_mbytes_per_sec": 0, 00:11:09.112 "w_mbytes_per_sec": 0 00:11:09.112 }, 00:11:09.112 "claimed": false, 00:11:09.112 "zoned": false, 00:11:09.112 "supported_io_types": { 00:11:09.112 "read": true, 00:11:09.112 "write": true, 00:11:09.112 "unmap": true, 00:11:09.112 "flush": true, 00:11:09.112 "reset": true, 00:11:09.112 "nvme_admin": false, 00:11:09.112 "nvme_io": false, 
00:11:09.112 "nvme_io_md": false, 00:11:09.112 "write_zeroes": true, 00:11:09.112 "zcopy": false, 00:11:09.112 "get_zone_info": false, 00:11:09.112 "zone_management": false, 00:11:09.112 "zone_append": false, 00:11:09.112 "compare": false, 00:11:09.112 "compare_and_write": false, 00:11:09.112 "abort": false, 00:11:09.112 "seek_hole": false, 00:11:09.112 "seek_data": false, 00:11:09.112 "copy": false, 00:11:09.112 "nvme_iov_md": false 00:11:09.112 }, 00:11:09.112 "memory_domains": [ 00:11:09.112 { 00:11:09.112 "dma_device_id": "system", 00:11:09.112 "dma_device_type": 1 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.112 "dma_device_type": 2 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "dma_device_id": "system", 00:11:09.112 "dma_device_type": 1 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.112 "dma_device_type": 2 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "dma_device_id": "system", 00:11:09.112 "dma_device_type": 1 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.112 "dma_device_type": 2 00:11:09.112 } 00:11:09.112 ], 00:11:09.112 "driver_specific": { 00:11:09.112 "raid": { 00:11:09.112 "uuid": "c7c17acb-910a-41aa-93c7-0e078594c2ad", 00:11:09.112 "strip_size_kb": 64, 00:11:09.112 "state": "online", 00:11:09.112 "raid_level": "concat", 00:11:09.112 "superblock": true, 00:11:09.112 "num_base_bdevs": 3, 00:11:09.112 "num_base_bdevs_discovered": 3, 00:11:09.112 "num_base_bdevs_operational": 3, 00:11:09.112 "base_bdevs_list": [ 00:11:09.112 { 00:11:09.112 "name": "pt1", 00:11:09.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.112 "is_configured": true, 00:11:09.112 "data_offset": 2048, 00:11:09.112 "data_size": 63488 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "name": "pt2", 00:11:09.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.112 "is_configured": true, 00:11:09.112 "data_offset": 2048, 00:11:09.112 
"data_size": 63488 00:11:09.112 }, 00:11:09.112 { 00:11:09.112 "name": "pt3", 00:11:09.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.112 "is_configured": true, 00:11:09.112 "data_offset": 2048, 00:11:09.112 "data_size": 63488 00:11:09.112 } 00:11:09.112 ] 00:11:09.112 } 00:11:09.112 } 00:11:09.112 }' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.112 pt2 00:11:09.112 pt3' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.112 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.370 [2024-10-08 16:19:02.546193] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c7c17acb-910a-41aa-93c7-0e078594c2ad '!=' c7c17acb-910a-41aa-93c7-0e078594c2ad ']' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67211 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67211 ']' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67211 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67211 00:11:09.370 killing process with pid 67211 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67211' 00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 67211 00:11:09.370 [2024-10-08 16:19:02.612942] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:09.370 16:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67211 00:11:09.370 [2024-10-08 16:19:02.613082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.370 [2024-10-08 16:19:02.613166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.370 [2024-10-08 16:19:02.613187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:09.628 [2024-10-08 16:19:02.884874] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.002 ************************************ 00:11:11.002 END TEST raid_superblock_test 00:11:11.002 ************************************ 00:11:11.002 16:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:11.002 00:11:11.002 real 0m5.898s 00:11:11.002 user 0m8.733s 00:11:11.002 sys 0m0.916s 00:11:11.002 16:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.002 16:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.002 16:19:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:11.002 16:19:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:11.002 16:19:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.002 16:19:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.002 ************************************ 00:11:11.002 START TEST raid_read_error_test 00:11:11.002 ************************************ 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.002 16:19:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zxH1nCYIDt 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67469 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67469 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67469 ']' 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.002 16:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.002 [2024-10-08 16:19:04.258107] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:11:11.002 [2024-10-08 16:19:04.258426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67469 ] 00:11:11.260 [2024-10-08 16:19:04.420251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.518 [2024-10-08 16:19:04.658768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.777 [2024-10-08 16:19:04.861633] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.777 [2024-10-08 16:19:04.861724] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.035 BaseBdev1_malloc 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.035 true 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.035 [2024-10-08 16:19:05.324542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.035 [2024-10-08 16:19:05.324639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.035 [2024-10-08 16:19:05.324675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.035 [2024-10-08 16:19:05.324694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.035 [2024-10-08 16:19:05.327467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.035 [2024-10-08 16:19:05.327743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.035 BaseBdev1 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.035 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.036 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.036 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 BaseBdev2_malloc 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 true 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 [2024-10-08 16:19:05.393827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.293 [2024-10-08 16:19:05.393925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.293 [2024-10-08 16:19:05.393967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.293 [2024-10-08 16:19:05.393985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.293 [2024-10-08 16:19:05.396815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.293 [2024-10-08 16:19:05.397152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.293 BaseBdev2 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 BaseBdev3_malloc 00:11:12.293 16:19:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 true 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 [2024-10-08 16:19:05.457515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.293 [2024-10-08 16:19:05.457599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.293 [2024-10-08 16:19:05.457625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.293 [2024-10-08 16:19:05.457642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.293 [2024-10-08 16:19:05.460362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.293 [2024-10-08 16:19:05.460657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.293 BaseBdev3 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 [2024-10-08 16:19:05.465624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.293 [2024-10-08 16:19:05.467914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.293 [2024-10-08 16:19:05.468078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.293 [2024-10-08 16:19:05.468363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.293 [2024-10-08 16:19:05.468383] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:12.293 [2024-10-08 16:19:05.468751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:12.293 [2024-10-08 16:19:05.468987] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.293 [2024-10-08 16:19:05.469016] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:12.293 [2024-10-08 16:19:05.469195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.293 16:19:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.293 "name": "raid_bdev1", 00:11:12.293 "uuid": "f5ed3b5e-cc31-4581-a093-55fb9b035488", 00:11:12.293 "strip_size_kb": 64, 00:11:12.293 "state": "online", 00:11:12.293 "raid_level": "concat", 00:11:12.293 "superblock": true, 00:11:12.293 "num_base_bdevs": 3, 00:11:12.293 "num_base_bdevs_discovered": 3, 00:11:12.293 "num_base_bdevs_operational": 3, 00:11:12.293 "base_bdevs_list": [ 00:11:12.293 { 00:11:12.293 "name": "BaseBdev1", 00:11:12.293 "uuid": "30964e4c-2b02-5879-a450-c761b15e05ec", 00:11:12.293 "is_configured": true, 00:11:12.293 "data_offset": 2048, 00:11:12.293 "data_size": 63488 00:11:12.293 }, 00:11:12.293 { 00:11:12.293 "name": "BaseBdev2", 00:11:12.293 "uuid": "0fddc4a1-3905-595b-b332-b946fefda773", 00:11:12.293 "is_configured": true, 00:11:12.293 "data_offset": 2048, 00:11:12.293 "data_size": 63488 
00:11:12.293 }, 00:11:12.293 { 00:11:12.293 "name": "BaseBdev3", 00:11:12.293 "uuid": "edc05579-2ae3-5270-a746-3baf1ccae59b", 00:11:12.293 "is_configured": true, 00:11:12.293 "data_offset": 2048, 00:11:12.293 "data_size": 63488 00:11:12.293 } 00:11:12.293 ] 00:11:12.293 }' 00:11:12.293 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.294 16:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.858 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:12.858 16:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.858 [2024-10-08 16:19:06.123161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:13.789 16:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.789 16:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.789 16:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.789 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.790 "name": "raid_bdev1", 00:11:13.790 "uuid": "f5ed3b5e-cc31-4581-a093-55fb9b035488", 00:11:13.790 "strip_size_kb": 64, 00:11:13.790 "state": "online", 00:11:13.790 "raid_level": "concat", 00:11:13.790 "superblock": true, 00:11:13.790 "num_base_bdevs": 3, 00:11:13.790 "num_base_bdevs_discovered": 3, 00:11:13.790 "num_base_bdevs_operational": 3, 00:11:13.790 "base_bdevs_list": [ 00:11:13.790 { 00:11:13.790 "name": "BaseBdev1", 00:11:13.790 "uuid": "30964e4c-2b02-5879-a450-c761b15e05ec", 00:11:13.790 "is_configured": true, 00:11:13.790 "data_offset": 2048, 00:11:13.790 "data_size": 63488 
00:11:13.790 }, 00:11:13.790 { 00:11:13.790 "name": "BaseBdev2", 00:11:13.790 "uuid": "0fddc4a1-3905-595b-b332-b946fefda773", 00:11:13.790 "is_configured": true, 00:11:13.790 "data_offset": 2048, 00:11:13.790 "data_size": 63488 00:11:13.790 }, 00:11:13.790 { 00:11:13.790 "name": "BaseBdev3", 00:11:13.790 "uuid": "edc05579-2ae3-5270-a746-3baf1ccae59b", 00:11:13.790 "is_configured": true, 00:11:13.790 "data_offset": 2048, 00:11:13.790 "data_size": 63488 00:11:13.790 } 00:11:13.790 ] 00:11:13.790 }' 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.790 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.356 [2024-10-08 16:19:07.526100] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.356 [2024-10-08 16:19:07.526385] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.356 [2024-10-08 16:19:07.530083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.356 [2024-10-08 16:19:07.530321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.356 [2024-10-08 16:19:07.530508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.356 [2024-10-08 16:19:07.530688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:14.356 { 00:11:14.356 "results": [ 00:11:14.356 { 00:11:14.356 "job": "raid_bdev1", 00:11:14.356 "core_mask": "0x1", 00:11:14.356 "workload": "randrw", 00:11:14.356 "percentage": 50, 
00:11:14.356 "status": "finished", 00:11:14.356 "queue_depth": 1, 00:11:14.356 "io_size": 131072, 00:11:14.356 "runtime": 1.400355, 00:11:14.356 "iops": 10881.52647007366, 00:11:14.356 "mibps": 1360.1908087592076, 00:11:14.356 "io_failed": 1, 00:11:14.356 "io_timeout": 0, 00:11:14.356 "avg_latency_us": 128.19076794588048, 00:11:14.356 "min_latency_us": 40.02909090909091, 00:11:14.356 "max_latency_us": 1817.1345454545456 00:11:14.356 } 00:11:14.356 ], 00:11:14.356 "core_count": 1 00:11:14.356 } 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67469 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67469 ']' 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67469 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67469 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67469' 00:11:14.356 killing process with pid 67469 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67469 00:11:14.356 [2024-10-08 16:19:07.574587] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.356 16:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67469 00:11:14.613 [2024-10-08 
16:19:07.781606] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zxH1nCYIDt 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.983 ************************************ 00:11:15.983 END TEST raid_read_error_test 00:11:15.983 ************************************ 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:15.983 00:11:15.983 real 0m4.898s 00:11:15.983 user 0m6.027s 00:11:15.983 sys 0m0.601s 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.983 16:19:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.983 16:19:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:15.983 16:19:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:15.983 16:19:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.983 16:19:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.983 ************************************ 00:11:15.983 START TEST raid_write_error_test 00:11:15.983 ************************************ 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:11:15.983 16:19:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.983 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:15.984 16:19:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.32vRgfLUtX 00:11:15.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67615 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67615 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67615 ']' 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.984 16:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.984 [2024-10-08 16:19:09.225806] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:15.984 [2024-10-08 16:19:09.226506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67615 ] 00:11:16.241 [2024-10-08 16:19:09.402330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.499 [2024-10-08 16:19:09.643264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.756 [2024-10-08 16:19:09.846809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.756 [2024-10-08 16:19:09.847129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.014 BaseBdev1_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.014 true 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.014 [2024-10-08 16:19:10.233670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.014 [2024-10-08 16:19:10.233772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.014 [2024-10-08 16:19:10.233803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.014 [2024-10-08 16:19:10.233822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.014 [2024-10-08 16:19:10.236725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.014 [2024-10-08 16:19:10.237025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.014 BaseBdev1 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.014 BaseBdev2_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.014 true 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.014 [2024-10-08 16:19:10.303451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.014 [2024-10-08 16:19:10.303557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.014 [2024-10-08 16:19:10.303589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.014 [2024-10-08 16:19:10.303608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.014 [2024-10-08 16:19:10.306388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.014 [2024-10-08 16:19:10.306442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.014 BaseBdev2 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.014 16:19:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.014 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 BaseBdev3_malloc 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 true 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 [2024-10-08 16:19:10.363647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.272 [2024-10-08 16:19:10.363758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.272 [2024-10-08 16:19:10.363789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.272 [2024-10-08 16:19:10.363808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.272 [2024-10-08 16:19:10.366757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.272 [2024-10-08 16:19:10.366816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:17.272 BaseBdev3 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 [2024-10-08 16:19:10.371746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.272 [2024-10-08 16:19:10.374229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.272 [2024-10-08 16:19:10.374343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.272 [2024-10-08 16:19:10.374640] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.272 [2024-10-08 16:19:10.374667] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:17.272 [2024-10-08 16:19:10.375043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.272 [2024-10-08 16:19:10.375260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.272 [2024-10-08 16:19:10.375281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:17.272 [2024-10-08 16:19:10.375476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.272 "name": "raid_bdev1", 00:11:17.272 "uuid": "72e9aaa5-ab88-407c-b11d-5637fb6bcb50", 00:11:17.272 "strip_size_kb": 64, 00:11:17.272 "state": "online", 00:11:17.272 "raid_level": "concat", 00:11:17.272 "superblock": true, 00:11:17.272 "num_base_bdevs": 3, 00:11:17.272 "num_base_bdevs_discovered": 3, 00:11:17.272 "num_base_bdevs_operational": 3, 00:11:17.272 "base_bdevs_list": [ 00:11:17.272 { 00:11:17.272 
"name": "BaseBdev1", 00:11:17.272 "uuid": "7a809e49-3703-547b-bea9-ac1d3a47b651", 00:11:17.272 "is_configured": true, 00:11:17.272 "data_offset": 2048, 00:11:17.272 "data_size": 63488 00:11:17.272 }, 00:11:17.272 { 00:11:17.272 "name": "BaseBdev2", 00:11:17.272 "uuid": "6ecb400f-2d88-5f81-b7d3-4bb76d2057d4", 00:11:17.272 "is_configured": true, 00:11:17.272 "data_offset": 2048, 00:11:17.272 "data_size": 63488 00:11:17.272 }, 00:11:17.272 { 00:11:17.272 "name": "BaseBdev3", 00:11:17.272 "uuid": "6b44a8c2-0539-541e-903a-e1ed5a4cee5c", 00:11:17.272 "is_configured": true, 00:11:17.272 "data_offset": 2048, 00:11:17.272 "data_size": 63488 00:11:17.272 } 00:11:17.272 ] 00:11:17.272 }' 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.272 16:19:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.529 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.529 16:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.786 [2024-10-08 16:19:10.961327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.735 "name": "raid_bdev1", 00:11:18.735 "uuid": "72e9aaa5-ab88-407c-b11d-5637fb6bcb50", 00:11:18.735 "strip_size_kb": 64, 00:11:18.735 "state": "online", 
00:11:18.735 "raid_level": "concat", 00:11:18.735 "superblock": true, 00:11:18.735 "num_base_bdevs": 3, 00:11:18.735 "num_base_bdevs_discovered": 3, 00:11:18.735 "num_base_bdevs_operational": 3, 00:11:18.735 "base_bdevs_list": [ 00:11:18.735 { 00:11:18.735 "name": "BaseBdev1", 00:11:18.735 "uuid": "7a809e49-3703-547b-bea9-ac1d3a47b651", 00:11:18.735 "is_configured": true, 00:11:18.735 "data_offset": 2048, 00:11:18.735 "data_size": 63488 00:11:18.735 }, 00:11:18.735 { 00:11:18.735 "name": "BaseBdev2", 00:11:18.735 "uuid": "6ecb400f-2d88-5f81-b7d3-4bb76d2057d4", 00:11:18.735 "is_configured": true, 00:11:18.735 "data_offset": 2048, 00:11:18.735 "data_size": 63488 00:11:18.735 }, 00:11:18.735 { 00:11:18.735 "name": "BaseBdev3", 00:11:18.735 "uuid": "6b44a8c2-0539-541e-903a-e1ed5a4cee5c", 00:11:18.735 "is_configured": true, 00:11:18.735 "data_offset": 2048, 00:11:18.735 "data_size": 63488 00:11:18.735 } 00:11:18.735 ] 00:11:18.735 }' 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.735 16:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.301 [2024-10-08 16:19:12.388275] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.301 [2024-10-08 16:19:12.388331] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.301 [2024-10-08 16:19:12.391711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.301 [2024-10-08 16:19:12.392027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.301 [2024-10-08 16:19:12.392102] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.301 [2024-10-08 16:19:12.392119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:19.301 { 00:11:19.301 "results": [ 00:11:19.301 { 00:11:19.301 "job": "raid_bdev1", 00:11:19.301 "core_mask": "0x1", 00:11:19.301 "workload": "randrw", 00:11:19.301 "percentage": 50, 00:11:19.301 "status": "finished", 00:11:19.301 "queue_depth": 1, 00:11:19.301 "io_size": 131072, 00:11:19.301 "runtime": 1.424217, 00:11:19.301 "iops": 10746.255661882986, 00:11:19.301 "mibps": 1343.2819577353732, 00:11:19.301 "io_failed": 1, 00:11:19.301 "io_timeout": 0, 00:11:19.301 "avg_latency_us": 129.8937258116247, 00:11:19.301 "min_latency_us": 40.72727272727273, 00:11:19.301 "max_latency_us": 1951.1854545454546 00:11:19.301 } 00:11:19.301 ], 00:11:19.301 "core_count": 1 00:11:19.301 } 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67615 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67615 ']' 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67615 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67615 00:11:19.301 killing process with pid 67615 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.301 16:19:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67615' 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67615 00:11:19.301 [2024-10-08 16:19:12.430535] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.301 16:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67615 00:11:19.560 [2024-10-08 16:19:12.639930] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.32vRgfLUtX 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:20.935 00:11:20.935 real 0m4.821s 00:11:20.935 user 0m5.883s 00:11:20.935 sys 0m0.603s 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.935 16:19:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.935 ************************************ 00:11:20.935 END TEST raid_write_error_test 00:11:20.935 ************************************ 00:11:20.935 16:19:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:20.935 16:19:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:20.935 16:19:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:20.935 16:19:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.935 16:19:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.935 ************************************ 00:11:20.935 START TEST raid_state_function_test 00:11:20.935 ************************************ 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:20.935 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67768 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67768' 00:11:20.936 Process raid pid: 67768 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67768 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67768 ']' 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.936 16:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.936 [2024-10-08 16:19:14.130714] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:20.936 [2024-10-08 16:19:14.130944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.193 [2024-10-08 16:19:14.312955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.451 [2024-10-08 16:19:14.557444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.451 [2024-10-08 16:19:14.765471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.451 [2024-10-08 16:19:14.765535] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.018 [2024-10-08 16:19:15.118192] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.018 [2024-10-08 16:19:15.118300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.018 [2024-10-08 16:19:15.118318] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.018 [2024-10-08 16:19:15.118338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.018 [2024-10-08 16:19:15.118349] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.018 [2024-10-08 16:19:15.118363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.018 
16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.018 "name": "Existed_Raid", 00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.018 "strip_size_kb": 0, 00:11:22.018 "state": "configuring", 00:11:22.018 "raid_level": "raid1", 00:11:22.018 "superblock": false, 00:11:22.018 "num_base_bdevs": 3, 00:11:22.018 "num_base_bdevs_discovered": 0, 00:11:22.018 "num_base_bdevs_operational": 3, 00:11:22.018 "base_bdevs_list": [ 00:11:22.018 { 00:11:22.018 "name": "BaseBdev1", 00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.018 "is_configured": false, 00:11:22.018 "data_offset": 0, 00:11:22.018 "data_size": 0 00:11:22.018 }, 00:11:22.018 { 00:11:22.018 "name": "BaseBdev2", 00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.018 "is_configured": false, 00:11:22.018 "data_offset": 0, 00:11:22.018 "data_size": 0 00:11:22.018 }, 00:11:22.018 { 00:11:22.018 "name": "BaseBdev3", 00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.018 "is_configured": false, 00:11:22.018 "data_offset": 0, 00:11:22.018 "data_size": 0 00:11:22.018 } 00:11:22.018 ] 00:11:22.018 }' 00:11:22.018 16:19:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.018 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 [2024-10-08 16:19:15.642223] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.585 [2024-10-08 16:19:15.642547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 [2024-10-08 16:19:15.650186] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.585 [2024-10-08 16:19:15.650251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.585 [2024-10-08 16:19:15.650267] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.585 [2024-10-08 16:19:15.650283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.585 [2024-10-08 16:19:15.650293] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.585 [2024-10-08 16:19:15.650307] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 [2024-10-08 16:19:15.705150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.585 BaseBdev1 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 [ 00:11:22.585 { 00:11:22.585 "name": "BaseBdev1", 00:11:22.585 "aliases": [ 00:11:22.585 "4eb0f631-4470-49d5-9502-1cc51eea5df9" 00:11:22.585 ], 00:11:22.585 "product_name": "Malloc disk", 00:11:22.585 "block_size": 512, 00:11:22.585 "num_blocks": 65536, 00:11:22.585 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:22.585 "assigned_rate_limits": { 00:11:22.585 "rw_ios_per_sec": 0, 00:11:22.585 "rw_mbytes_per_sec": 0, 00:11:22.585 "r_mbytes_per_sec": 0, 00:11:22.585 "w_mbytes_per_sec": 0 00:11:22.585 }, 00:11:22.585 "claimed": true, 00:11:22.585 "claim_type": "exclusive_write", 00:11:22.585 "zoned": false, 00:11:22.585 "supported_io_types": { 00:11:22.585 "read": true, 00:11:22.585 "write": true, 00:11:22.585 "unmap": true, 00:11:22.585 "flush": true, 00:11:22.585 "reset": true, 00:11:22.585 "nvme_admin": false, 00:11:22.585 "nvme_io": false, 00:11:22.585 "nvme_io_md": false, 00:11:22.585 "write_zeroes": true, 00:11:22.585 "zcopy": true, 00:11:22.585 "get_zone_info": false, 00:11:22.585 "zone_management": false, 00:11:22.585 "zone_append": false, 00:11:22.585 "compare": false, 00:11:22.585 "compare_and_write": false, 00:11:22.585 "abort": true, 00:11:22.585 "seek_hole": false, 00:11:22.585 "seek_data": false, 00:11:22.585 "copy": true, 00:11:22.585 "nvme_iov_md": false 00:11:22.585 }, 00:11:22.585 "memory_domains": [ 00:11:22.585 { 00:11:22.585 "dma_device_id": "system", 00:11:22.585 "dma_device_type": 1 00:11:22.585 }, 00:11:22.585 { 00:11:22.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.585 "dma_device_type": 2 00:11:22.585 } 00:11:22.585 ], 00:11:22.585 "driver_specific": {} 00:11:22.585 } 00:11:22.585 ] 00:11:22.585 16:19:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.585 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:22.585 "name": "Existed_Raid", 00:11:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.585 "strip_size_kb": 0, 00:11:22.585 "state": "configuring", 00:11:22.585 "raid_level": "raid1", 00:11:22.585 "superblock": false, 00:11:22.585 "num_base_bdevs": 3, 00:11:22.585 "num_base_bdevs_discovered": 1, 00:11:22.585 "num_base_bdevs_operational": 3, 00:11:22.585 "base_bdevs_list": [ 00:11:22.585 { 00:11:22.585 "name": "BaseBdev1", 00:11:22.585 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:22.586 "is_configured": true, 00:11:22.586 "data_offset": 0, 00:11:22.586 "data_size": 65536 00:11:22.586 }, 00:11:22.586 { 00:11:22.586 "name": "BaseBdev2", 00:11:22.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.586 "is_configured": false, 00:11:22.586 "data_offset": 0, 00:11:22.586 "data_size": 0 00:11:22.586 }, 00:11:22.586 { 00:11:22.586 "name": "BaseBdev3", 00:11:22.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.586 "is_configured": false, 00:11:22.586 "data_offset": 0, 00:11:22.586 "data_size": 0 00:11:22.586 } 00:11:22.586 ] 00:11:22.586 }' 00:11:22.586 16:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.586 16:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 [2024-10-08 16:19:16.269329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.177 [2024-10-08 16:19:16.269405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 [2024-10-08 16:19:16.277348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.177 [2024-10-08 16:19:16.280166] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.177 [2024-10-08 16:19:16.280231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.177 [2024-10-08 16:19:16.280250] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.177 [2024-10-08 16:19:16.280266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.177 "name": "Existed_Raid", 00:11:23.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.177 "strip_size_kb": 0, 00:11:23.177 "state": "configuring", 00:11:23.177 "raid_level": "raid1", 00:11:23.177 "superblock": false, 00:11:23.177 "num_base_bdevs": 3, 00:11:23.177 "num_base_bdevs_discovered": 1, 00:11:23.177 "num_base_bdevs_operational": 3, 00:11:23.177 "base_bdevs_list": [ 00:11:23.177 { 00:11:23.177 "name": "BaseBdev1", 00:11:23.177 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:23.177 "is_configured": true, 00:11:23.177 "data_offset": 0, 00:11:23.177 "data_size": 65536 00:11:23.177 }, 00:11:23.177 { 00:11:23.177 "name": "BaseBdev2", 00:11:23.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.177 
"is_configured": false, 00:11:23.177 "data_offset": 0, 00:11:23.177 "data_size": 0 00:11:23.177 }, 00:11:23.177 { 00:11:23.177 "name": "BaseBdev3", 00:11:23.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.177 "is_configured": false, 00:11:23.177 "data_offset": 0, 00:11:23.177 "data_size": 0 00:11:23.177 } 00:11:23.177 ] 00:11:23.177 }' 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.177 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.743 [2024-10-08 16:19:16.815782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.743 BaseBdev2 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:23.743 16:19:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.743 [ 00:11:23.743 { 00:11:23.743 "name": "BaseBdev2", 00:11:23.743 "aliases": [ 00:11:23.743 "ba84c4de-7c10-4d7a-af8d-822be260e18e" 00:11:23.743 ], 00:11:23.743 "product_name": "Malloc disk", 00:11:23.743 "block_size": 512, 00:11:23.743 "num_blocks": 65536, 00:11:23.743 "uuid": "ba84c4de-7c10-4d7a-af8d-822be260e18e", 00:11:23.743 "assigned_rate_limits": { 00:11:23.743 "rw_ios_per_sec": 0, 00:11:23.743 "rw_mbytes_per_sec": 0, 00:11:23.743 "r_mbytes_per_sec": 0, 00:11:23.743 "w_mbytes_per_sec": 0 00:11:23.743 }, 00:11:23.743 "claimed": true, 00:11:23.743 "claim_type": "exclusive_write", 00:11:23.743 "zoned": false, 00:11:23.743 "supported_io_types": { 00:11:23.743 "read": true, 00:11:23.743 "write": true, 00:11:23.743 "unmap": true, 00:11:23.743 "flush": true, 00:11:23.743 "reset": true, 00:11:23.743 "nvme_admin": false, 00:11:23.743 "nvme_io": false, 00:11:23.743 "nvme_io_md": false, 00:11:23.743 "write_zeroes": true, 00:11:23.743 "zcopy": true, 00:11:23.743 "get_zone_info": false, 00:11:23.743 "zone_management": false, 00:11:23.743 "zone_append": false, 00:11:23.743 "compare": false, 00:11:23.743 "compare_and_write": false, 00:11:23.743 "abort": true, 00:11:23.743 "seek_hole": false, 00:11:23.743 "seek_data": false, 00:11:23.743 "copy": true, 00:11:23.743 "nvme_iov_md": false 00:11:23.743 }, 00:11:23.743 
"memory_domains": [ 00:11:23.743 { 00:11:23.743 "dma_device_id": "system", 00:11:23.743 "dma_device_type": 1 00:11:23.743 }, 00:11:23.743 { 00:11:23.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.743 "dma_device_type": 2 00:11:23.743 } 00:11:23.743 ], 00:11:23.743 "driver_specific": {} 00:11:23.743 } 00:11:23.743 ] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.743 "name": "Existed_Raid", 00:11:23.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.743 "strip_size_kb": 0, 00:11:23.743 "state": "configuring", 00:11:23.743 "raid_level": "raid1", 00:11:23.743 "superblock": false, 00:11:23.743 "num_base_bdevs": 3, 00:11:23.743 "num_base_bdevs_discovered": 2, 00:11:23.743 "num_base_bdevs_operational": 3, 00:11:23.743 "base_bdevs_list": [ 00:11:23.743 { 00:11:23.743 "name": "BaseBdev1", 00:11:23.743 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:23.743 "is_configured": true, 00:11:23.743 "data_offset": 0, 00:11:23.743 "data_size": 65536 00:11:23.743 }, 00:11:23.743 { 00:11:23.743 "name": "BaseBdev2", 00:11:23.743 "uuid": "ba84c4de-7c10-4d7a-af8d-822be260e18e", 00:11:23.743 "is_configured": true, 00:11:23.743 "data_offset": 0, 00:11:23.743 "data_size": 65536 00:11:23.743 }, 00:11:23.743 { 00:11:23.743 "name": "BaseBdev3", 00:11:23.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.743 "is_configured": false, 00:11:23.743 "data_offset": 0, 00:11:23.743 "data_size": 0 00:11:23.743 } 00:11:23.743 ] 00:11:23.743 }' 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.743 16:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 [2024-10-08 16:19:17.382878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.309 [2024-10-08 16:19:17.382956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.309 [2024-10-08 16:19:17.382976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:24.309 [2024-10-08 16:19:17.383343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:24.309 [2024-10-08 16:19:17.383611] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.309 [2024-10-08 16:19:17.383630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.309 [2024-10-08 16:19:17.383998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.309 BaseBdev3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 [ 00:11:24.309 { 00:11:24.309 "name": "BaseBdev3", 00:11:24.309 "aliases": [ 00:11:24.309 "7914a4c3-1075-4f8e-a29f-fba68d18ad3e" 00:11:24.309 ], 00:11:24.309 "product_name": "Malloc disk", 00:11:24.309 "block_size": 512, 00:11:24.309 "num_blocks": 65536, 00:11:24.309 "uuid": "7914a4c3-1075-4f8e-a29f-fba68d18ad3e", 00:11:24.309 "assigned_rate_limits": { 00:11:24.309 "rw_ios_per_sec": 0, 00:11:24.309 "rw_mbytes_per_sec": 0, 00:11:24.309 "r_mbytes_per_sec": 0, 00:11:24.309 "w_mbytes_per_sec": 0 00:11:24.309 }, 00:11:24.309 "claimed": true, 00:11:24.309 "claim_type": "exclusive_write", 00:11:24.309 "zoned": false, 00:11:24.309 "supported_io_types": { 00:11:24.309 "read": true, 00:11:24.309 "write": true, 00:11:24.309 "unmap": true, 00:11:24.309 "flush": true, 00:11:24.309 "reset": true, 00:11:24.309 "nvme_admin": false, 00:11:24.309 "nvme_io": false, 00:11:24.309 "nvme_io_md": false, 00:11:24.309 "write_zeroes": true, 00:11:24.309 "zcopy": true, 00:11:24.309 "get_zone_info": false, 00:11:24.309 "zone_management": false, 00:11:24.309 "zone_append": false, 00:11:24.309 "compare": false, 00:11:24.309 "compare_and_write": false, 00:11:24.309 "abort": true, 00:11:24.309 "seek_hole": false, 00:11:24.309 "seek_data": false, 00:11:24.309 
"copy": true, 00:11:24.309 "nvme_iov_md": false 00:11:24.309 }, 00:11:24.309 "memory_domains": [ 00:11:24.309 { 00:11:24.309 "dma_device_id": "system", 00:11:24.309 "dma_device_type": 1 00:11:24.309 }, 00:11:24.309 { 00:11:24.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.309 "dma_device_type": 2 00:11:24.309 } 00:11:24.309 ], 00:11:24.309 "driver_specific": {} 00:11:24.309 } 00:11:24.309 ] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.309 16:19:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.309 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.309 "name": "Existed_Raid", 00:11:24.309 "uuid": "56c516f7-081a-49e6-92ed-15d8181fea60", 00:11:24.309 "strip_size_kb": 0, 00:11:24.309 "state": "online", 00:11:24.309 "raid_level": "raid1", 00:11:24.309 "superblock": false, 00:11:24.309 "num_base_bdevs": 3, 00:11:24.309 "num_base_bdevs_discovered": 3, 00:11:24.309 "num_base_bdevs_operational": 3, 00:11:24.309 "base_bdevs_list": [ 00:11:24.309 { 00:11:24.309 "name": "BaseBdev1", 00:11:24.309 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:24.309 "is_configured": true, 00:11:24.309 "data_offset": 0, 00:11:24.309 "data_size": 65536 00:11:24.309 }, 00:11:24.309 { 00:11:24.309 "name": "BaseBdev2", 00:11:24.309 "uuid": "ba84c4de-7c10-4d7a-af8d-822be260e18e", 00:11:24.309 "is_configured": true, 00:11:24.309 "data_offset": 0, 00:11:24.309 "data_size": 65536 00:11:24.309 }, 00:11:24.309 { 00:11:24.309 "name": "BaseBdev3", 00:11:24.310 "uuid": "7914a4c3-1075-4f8e-a29f-fba68d18ad3e", 00:11:24.310 "is_configured": true, 00:11:24.310 "data_offset": 0, 00:11:24.310 "data_size": 65536 00:11:24.310 } 00:11:24.310 ] 00:11:24.310 }' 00:11:24.310 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.310 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.874 16:19:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.874 [2024-10-08 16:19:17.951585] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.874 "name": "Existed_Raid", 00:11:24.874 "aliases": [ 00:11:24.874 "56c516f7-081a-49e6-92ed-15d8181fea60" 00:11:24.874 ], 00:11:24.874 "product_name": "Raid Volume", 00:11:24.874 "block_size": 512, 00:11:24.874 "num_blocks": 65536, 00:11:24.874 "uuid": "56c516f7-081a-49e6-92ed-15d8181fea60", 00:11:24.874 "assigned_rate_limits": { 00:11:24.874 "rw_ios_per_sec": 0, 00:11:24.874 "rw_mbytes_per_sec": 0, 00:11:24.874 "r_mbytes_per_sec": 0, 00:11:24.874 "w_mbytes_per_sec": 0 00:11:24.874 }, 00:11:24.874 "claimed": false, 00:11:24.874 "zoned": false, 
00:11:24.874 "supported_io_types": { 00:11:24.874 "read": true, 00:11:24.874 "write": true, 00:11:24.874 "unmap": false, 00:11:24.874 "flush": false, 00:11:24.874 "reset": true, 00:11:24.874 "nvme_admin": false, 00:11:24.874 "nvme_io": false, 00:11:24.874 "nvme_io_md": false, 00:11:24.874 "write_zeroes": true, 00:11:24.874 "zcopy": false, 00:11:24.874 "get_zone_info": false, 00:11:24.874 "zone_management": false, 00:11:24.874 "zone_append": false, 00:11:24.874 "compare": false, 00:11:24.874 "compare_and_write": false, 00:11:24.874 "abort": false, 00:11:24.874 "seek_hole": false, 00:11:24.874 "seek_data": false, 00:11:24.874 "copy": false, 00:11:24.874 "nvme_iov_md": false 00:11:24.874 }, 00:11:24.874 "memory_domains": [ 00:11:24.874 { 00:11:24.874 "dma_device_id": "system", 00:11:24.874 "dma_device_type": 1 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.874 "dma_device_type": 2 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "dma_device_id": "system", 00:11:24.874 "dma_device_type": 1 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.874 "dma_device_type": 2 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "dma_device_id": "system", 00:11:24.874 "dma_device_type": 1 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.874 "dma_device_type": 2 00:11:24.874 } 00:11:24.874 ], 00:11:24.874 "driver_specific": { 00:11:24.874 "raid": { 00:11:24.874 "uuid": "56c516f7-081a-49e6-92ed-15d8181fea60", 00:11:24.874 "strip_size_kb": 0, 00:11:24.874 "state": "online", 00:11:24.874 "raid_level": "raid1", 00:11:24.874 "superblock": false, 00:11:24.874 "num_base_bdevs": 3, 00:11:24.874 "num_base_bdevs_discovered": 3, 00:11:24.874 "num_base_bdevs_operational": 3, 00:11:24.874 "base_bdevs_list": [ 00:11:24.874 { 00:11:24.874 "name": "BaseBdev1", 00:11:24.874 "uuid": "4eb0f631-4470-49d5-9502-1cc51eea5df9", 00:11:24.874 "is_configured": true, 00:11:24.874 
"data_offset": 0, 00:11:24.874 "data_size": 65536 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "name": "BaseBdev2", 00:11:24.874 "uuid": "ba84c4de-7c10-4d7a-af8d-822be260e18e", 00:11:24.874 "is_configured": true, 00:11:24.874 "data_offset": 0, 00:11:24.874 "data_size": 65536 00:11:24.874 }, 00:11:24.874 { 00:11:24.874 "name": "BaseBdev3", 00:11:24.874 "uuid": "7914a4c3-1075-4f8e-a29f-fba68d18ad3e", 00:11:24.874 "is_configured": true, 00:11:24.874 "data_offset": 0, 00:11:24.874 "data_size": 65536 00:11:24.874 } 00:11:24.874 ] 00:11:24.874 } 00:11:24.874 } 00:11:24.874 }' 00:11:24.874 16:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.874 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.874 BaseBdev2 00:11:24.874 BaseBdev3' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.875 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 [2024-10-08 16:19:18.231250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.177 "name": "Existed_Raid", 00:11:25.177 "uuid": "56c516f7-081a-49e6-92ed-15d8181fea60", 00:11:25.177 "strip_size_kb": 0, 00:11:25.177 "state": "online", 00:11:25.177 "raid_level": "raid1", 00:11:25.177 "superblock": false, 00:11:25.177 "num_base_bdevs": 3, 00:11:25.177 "num_base_bdevs_discovered": 2, 00:11:25.177 "num_base_bdevs_operational": 2, 00:11:25.177 "base_bdevs_list": [ 00:11:25.177 { 00:11:25.177 "name": null, 00:11:25.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.177 "is_configured": false, 00:11:25.177 "data_offset": 0, 00:11:25.177 "data_size": 65536 00:11:25.177 }, 00:11:25.177 { 00:11:25.177 "name": "BaseBdev2", 00:11:25.177 "uuid": "ba84c4de-7c10-4d7a-af8d-822be260e18e", 00:11:25.177 "is_configured": true, 00:11:25.177 "data_offset": 0, 00:11:25.177 "data_size": 65536 00:11:25.177 }, 00:11:25.177 { 00:11:25.177 "name": "BaseBdev3", 00:11:25.177 "uuid": "7914a4c3-1075-4f8e-a29f-fba68d18ad3e", 00:11:25.177 "is_configured": true, 00:11:25.177 "data_offset": 0, 00:11:25.177 "data_size": 65536 00:11:25.177 } 00:11:25.177 ] 
00:11:25.177 }' 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.177 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.741 16:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 [2024-10-08 16:19:18.928059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.741 16:19:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.999 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.999 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.999 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 [2024-10-08 16:19:19.089989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.000 [2024-10-08 16:19:19.090138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.000 [2024-10-08 16:19:19.183710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.000 [2024-10-08 16:19:19.184001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.000 [2024-10-08 16:19:19.184153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.000 16:19:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 BaseBdev2 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.000 
16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.000 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 [ 00:11:26.000 { 00:11:26.000 "name": "BaseBdev2", 00:11:26.000 "aliases": [ 00:11:26.000 "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c" 00:11:26.000 ], 00:11:26.000 "product_name": "Malloc disk", 00:11:26.000 "block_size": 512, 00:11:26.000 "num_blocks": 65536, 00:11:26.000 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:26.000 "assigned_rate_limits": { 00:11:26.000 "rw_ios_per_sec": 0, 00:11:26.000 "rw_mbytes_per_sec": 0, 00:11:26.000 "r_mbytes_per_sec": 0, 00:11:26.000 "w_mbytes_per_sec": 0 00:11:26.000 }, 00:11:26.000 "claimed": false, 00:11:26.000 "zoned": false, 00:11:26.000 "supported_io_types": { 00:11:26.000 "read": true, 00:11:26.000 "write": true, 00:11:26.000 "unmap": true, 00:11:26.000 "flush": true, 00:11:26.000 "reset": true, 00:11:26.000 "nvme_admin": false, 00:11:26.000 "nvme_io": false, 00:11:26.000 "nvme_io_md": false, 00:11:26.000 "write_zeroes": true, 
00:11:26.000 "zcopy": true, 00:11:26.258 "get_zone_info": false, 00:11:26.258 "zone_management": false, 00:11:26.258 "zone_append": false, 00:11:26.258 "compare": false, 00:11:26.258 "compare_and_write": false, 00:11:26.258 "abort": true, 00:11:26.258 "seek_hole": false, 00:11:26.258 "seek_data": false, 00:11:26.258 "copy": true, 00:11:26.258 "nvme_iov_md": false 00:11:26.258 }, 00:11:26.258 "memory_domains": [ 00:11:26.258 { 00:11:26.258 "dma_device_id": "system", 00:11:26.258 "dma_device_type": 1 00:11:26.258 }, 00:11:26.258 { 00:11:26.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.258 "dma_device_type": 2 00:11:26.258 } 00:11:26.258 ], 00:11:26.258 "driver_specific": {} 00:11:26.258 } 00:11:26.258 ] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.258 BaseBdev3 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.258 16:19:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.258 [ 00:11:26.258 { 00:11:26.258 "name": "BaseBdev3", 00:11:26.258 "aliases": [ 00:11:26.258 "a632b68a-1016-4d16-972c-22296d46758d" 00:11:26.258 ], 00:11:26.258 "product_name": "Malloc disk", 00:11:26.258 "block_size": 512, 00:11:26.258 "num_blocks": 65536, 00:11:26.258 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:26.258 "assigned_rate_limits": { 00:11:26.258 "rw_ios_per_sec": 0, 00:11:26.258 "rw_mbytes_per_sec": 0, 00:11:26.258 "r_mbytes_per_sec": 0, 00:11:26.258 "w_mbytes_per_sec": 0 00:11:26.258 }, 00:11:26.258 "claimed": false, 00:11:26.258 "zoned": false, 00:11:26.258 "supported_io_types": { 00:11:26.258 "read": true, 00:11:26.258 "write": true, 00:11:26.258 "unmap": true, 00:11:26.258 "flush": true, 00:11:26.258 "reset": true, 00:11:26.258 "nvme_admin": false, 00:11:26.258 "nvme_io": false, 00:11:26.258 "nvme_io_md": false, 00:11:26.258 "write_zeroes": true, 
00:11:26.258 "zcopy": true, 00:11:26.258 "get_zone_info": false, 00:11:26.258 "zone_management": false, 00:11:26.258 "zone_append": false, 00:11:26.258 "compare": false, 00:11:26.258 "compare_and_write": false, 00:11:26.258 "abort": true, 00:11:26.258 "seek_hole": false, 00:11:26.258 "seek_data": false, 00:11:26.258 "copy": true, 00:11:26.258 "nvme_iov_md": false 00:11:26.258 }, 00:11:26.258 "memory_domains": [ 00:11:26.258 { 00:11:26.258 "dma_device_id": "system", 00:11:26.258 "dma_device_type": 1 00:11:26.258 }, 00:11:26.258 { 00:11:26.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.258 "dma_device_type": 2 00:11:26.258 } 00:11:26.258 ], 00:11:26.258 "driver_specific": {} 00:11:26.258 } 00:11:26.258 ] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.258 [2024-10-08 16:19:19.408630] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.258 [2024-10-08 16:19:19.408829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.258 [2024-10-08 16:19:19.408872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.258 [2024-10-08 16:19:19.411627] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.258 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:26.259 "name": "Existed_Raid", 00:11:26.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.259 "strip_size_kb": 0, 00:11:26.259 "state": "configuring", 00:11:26.259 "raid_level": "raid1", 00:11:26.259 "superblock": false, 00:11:26.259 "num_base_bdevs": 3, 00:11:26.259 "num_base_bdevs_discovered": 2, 00:11:26.259 "num_base_bdevs_operational": 3, 00:11:26.259 "base_bdevs_list": [ 00:11:26.259 { 00:11:26.259 "name": "BaseBdev1", 00:11:26.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.259 "is_configured": false, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 0 00:11:26.259 }, 00:11:26.259 { 00:11:26.259 "name": "BaseBdev2", 00:11:26.259 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:26.259 "is_configured": true, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 }, 00:11:26.259 { 00:11:26.259 "name": "BaseBdev3", 00:11:26.259 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:26.259 "is_configured": true, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 } 00:11:26.259 ] 00:11:26.259 }' 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.259 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.824 [2024-10-08 16:19:19.940811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:26.824 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.825 "name": "Existed_Raid", 00:11:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.825 "strip_size_kb": 0, 00:11:26.825 "state": "configuring", 00:11:26.825 "raid_level": "raid1", 00:11:26.825 "superblock": false, 00:11:26.825 "num_base_bdevs": 3, 
00:11:26.825 "num_base_bdevs_discovered": 1, 00:11:26.825 "num_base_bdevs_operational": 3, 00:11:26.825 "base_bdevs_list": [ 00:11:26.825 { 00:11:26.825 "name": "BaseBdev1", 00:11:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.825 "is_configured": false, 00:11:26.825 "data_offset": 0, 00:11:26.825 "data_size": 0 00:11:26.825 }, 00:11:26.825 { 00:11:26.825 "name": null, 00:11:26.825 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:26.825 "is_configured": false, 00:11:26.825 "data_offset": 0, 00:11:26.825 "data_size": 65536 00:11:26.825 }, 00:11:26.825 { 00:11:26.825 "name": "BaseBdev3", 00:11:26.825 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:26.825 "is_configured": true, 00:11:26.825 "data_offset": 0, 00:11:26.825 "data_size": 65536 00:11:26.825 } 00:11:26.825 ] 00:11:26.825 }' 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.825 16:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.389 16:19:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.389 [2024-10-08 16:19:20.531207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.389 BaseBdev1 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.389 [ 00:11:27.389 { 00:11:27.389 "name": "BaseBdev1", 00:11:27.389 "aliases": [ 00:11:27.389 "4ba992d7-757e-4b2c-8302-aef73a1fce91" 00:11:27.389 ], 00:11:27.389 "product_name": "Malloc disk", 
00:11:27.389 "block_size": 512, 00:11:27.389 "num_blocks": 65536, 00:11:27.389 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:27.389 "assigned_rate_limits": { 00:11:27.389 "rw_ios_per_sec": 0, 00:11:27.389 "rw_mbytes_per_sec": 0, 00:11:27.389 "r_mbytes_per_sec": 0, 00:11:27.389 "w_mbytes_per_sec": 0 00:11:27.389 }, 00:11:27.389 "claimed": true, 00:11:27.389 "claim_type": "exclusive_write", 00:11:27.389 "zoned": false, 00:11:27.389 "supported_io_types": { 00:11:27.389 "read": true, 00:11:27.389 "write": true, 00:11:27.389 "unmap": true, 00:11:27.389 "flush": true, 00:11:27.389 "reset": true, 00:11:27.389 "nvme_admin": false, 00:11:27.389 "nvme_io": false, 00:11:27.389 "nvme_io_md": false, 00:11:27.389 "write_zeroes": true, 00:11:27.389 "zcopy": true, 00:11:27.389 "get_zone_info": false, 00:11:27.389 "zone_management": false, 00:11:27.389 "zone_append": false, 00:11:27.389 "compare": false, 00:11:27.389 "compare_and_write": false, 00:11:27.389 "abort": true, 00:11:27.389 "seek_hole": false, 00:11:27.389 "seek_data": false, 00:11:27.389 "copy": true, 00:11:27.389 "nvme_iov_md": false 00:11:27.389 }, 00:11:27.389 "memory_domains": [ 00:11:27.389 { 00:11:27.389 "dma_device_id": "system", 00:11:27.389 "dma_device_type": 1 00:11:27.389 }, 00:11:27.389 { 00:11:27.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.389 "dma_device_type": 2 00:11:27.389 } 00:11:27.389 ], 00:11:27.389 "driver_specific": {} 00:11:27.389 } 00:11:27.389 ] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.389 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.390 "name": "Existed_Raid", 00:11:27.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.390 "strip_size_kb": 0, 00:11:27.390 "state": "configuring", 00:11:27.390 "raid_level": "raid1", 00:11:27.390 "superblock": false, 00:11:27.390 "num_base_bdevs": 3, 00:11:27.390 "num_base_bdevs_discovered": 2, 00:11:27.390 "num_base_bdevs_operational": 3, 00:11:27.390 "base_bdevs_list": [ 00:11:27.390 { 00:11:27.390 "name": "BaseBdev1", 00:11:27.390 "uuid": 
"4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:27.390 "is_configured": true, 00:11:27.390 "data_offset": 0, 00:11:27.390 "data_size": 65536 00:11:27.390 }, 00:11:27.390 { 00:11:27.390 "name": null, 00:11:27.390 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:27.390 "is_configured": false, 00:11:27.390 "data_offset": 0, 00:11:27.390 "data_size": 65536 00:11:27.390 }, 00:11:27.390 { 00:11:27.390 "name": "BaseBdev3", 00:11:27.390 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:27.390 "is_configured": true, 00:11:27.390 "data_offset": 0, 00:11:27.390 "data_size": 65536 00:11:27.390 } 00:11:27.390 ] 00:11:27.390 }' 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.390 16:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 [2024-10-08 16:19:21.139469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.955 16:19:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.955 "name": "Existed_Raid", 00:11:27.955 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:27.955 "strip_size_kb": 0, 00:11:27.955 "state": "configuring", 00:11:27.955 "raid_level": "raid1", 00:11:27.955 "superblock": false, 00:11:27.955 "num_base_bdevs": 3, 00:11:27.955 "num_base_bdevs_discovered": 1, 00:11:27.955 "num_base_bdevs_operational": 3, 00:11:27.955 "base_bdevs_list": [ 00:11:27.955 { 00:11:27.955 "name": "BaseBdev1", 00:11:27.955 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:27.955 "is_configured": true, 00:11:27.955 "data_offset": 0, 00:11:27.955 "data_size": 65536 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "name": null, 00:11:27.955 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:27.955 "is_configured": false, 00:11:27.955 "data_offset": 0, 00:11:27.955 "data_size": 65536 00:11:27.955 }, 00:11:27.955 { 00:11:27.955 "name": null, 00:11:27.955 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:27.955 "is_configured": false, 00:11:27.955 "data_offset": 0, 00:11:27.955 "data_size": 65536 00:11:27.955 } 00:11:27.955 ] 00:11:27.955 }' 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.955 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 [2024-10-08 16:19:21.695657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.520 "name": "Existed_Raid", 00:11:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.520 "strip_size_kb": 0, 00:11:28.520 "state": "configuring", 00:11:28.520 "raid_level": "raid1", 00:11:28.520 "superblock": false, 00:11:28.520 "num_base_bdevs": 3, 00:11:28.520 "num_base_bdevs_discovered": 2, 00:11:28.520 "num_base_bdevs_operational": 3, 00:11:28.520 "base_bdevs_list": [ 00:11:28.520 { 00:11:28.520 "name": "BaseBdev1", 00:11:28.520 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:28.520 "is_configured": true, 00:11:28.520 "data_offset": 0, 00:11:28.520 "data_size": 65536 00:11:28.520 }, 00:11:28.520 { 00:11:28.520 "name": null, 00:11:28.520 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:28.520 "is_configured": false, 00:11:28.520 "data_offset": 0, 00:11:28.520 "data_size": 65536 00:11:28.520 }, 00:11:28.520 { 00:11:28.520 "name": "BaseBdev3", 00:11:28.520 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:28.520 "is_configured": true, 00:11:28.520 "data_offset": 0, 00:11:28.520 "data_size": 65536 00:11:28.520 } 00:11:28.520 ] 00:11:28.520 }' 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.520 16:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
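The `verify_raid_bdev_state` checks traced above amount to selecting the `Existed_Raid` entry from `bdev_raid_get_bdevs all` (the `jq -r '.[] | select(.name == "Existed_Raid")'` filter in the xtrace) and comparing a handful of fields against the expected values. A minimal standalone sketch of that comparison in Python: the JSON literal is an abridged copy of the `raid_bdev_info` dump in the log above (uuids and offsets trimmed), and the helper name `verify_state` is mine, not part of the SPDK test suite.

```python
import json

# Abridged copy of the raid_bdev_info JSON dumped by
# `rpc_cmd bdev_raid_get_bdevs all` in the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field-by-field comparison verify_raid_bdev_state performs
    # on the selected entry (hypothetical re-implementation, not SPDK code).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count must match the configured entries in the list.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]

verify_state(raid_bdev_info, "configuring", "raid1", 0, 3)
```

Under this reading, each `verify_raid_bdev_state Existed_Raid configuring raid1 0 3` call in the log is one invocation of such a check, with the discovered count changing as base bdevs are removed and re-added.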
00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:29.086 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.087 [2024-10-08 16:19:22.231896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.087 16:19:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.087 "name": "Existed_Raid", 00:11:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.087 "strip_size_kb": 0, 00:11:29.087 "state": "configuring", 00:11:29.087 "raid_level": "raid1", 00:11:29.087 "superblock": false, 00:11:29.087 "num_base_bdevs": 3, 00:11:29.087 "num_base_bdevs_discovered": 1, 00:11:29.087 "num_base_bdevs_operational": 3, 00:11:29.087 "base_bdevs_list": [ 00:11:29.087 { 00:11:29.087 "name": null, 00:11:29.087 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:29.087 "is_configured": false, 00:11:29.087 "data_offset": 0, 00:11:29.087 "data_size": 65536 00:11:29.087 }, 00:11:29.087 { 00:11:29.087 "name": null, 00:11:29.087 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:29.087 "is_configured": false, 00:11:29.087 "data_offset": 0, 00:11:29.087 "data_size": 65536 00:11:29.087 }, 00:11:29.087 { 00:11:29.087 "name": "BaseBdev3", 00:11:29.087 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:29.087 "is_configured": true, 00:11:29.087 "data_offset": 0, 00:11:29.087 "data_size": 65536 00:11:29.087 } 00:11:29.087 ] 00:11:29.087 }' 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.087 16:19:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.653 [2024-10-08 16:19:22.864299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.653 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.653 "name": "Existed_Raid", 00:11:29.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.653 "strip_size_kb": 0, 00:11:29.653 "state": "configuring", 00:11:29.653 "raid_level": "raid1", 00:11:29.653 "superblock": false, 00:11:29.653 "num_base_bdevs": 3, 00:11:29.653 "num_base_bdevs_discovered": 2, 00:11:29.654 "num_base_bdevs_operational": 3, 00:11:29.654 "base_bdevs_list": [ 00:11:29.654 { 00:11:29.654 "name": null, 00:11:29.654 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:29.654 "is_configured": false, 00:11:29.654 "data_offset": 0, 00:11:29.654 "data_size": 65536 00:11:29.654 }, 00:11:29.654 { 00:11:29.654 "name": "BaseBdev2", 00:11:29.654 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:29.654 "is_configured": true, 00:11:29.654 "data_offset": 0, 00:11:29.654 "data_size": 65536 00:11:29.654 }, 00:11:29.654 { 
00:11:29.654 "name": "BaseBdev3", 00:11:29.654 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:29.654 "is_configured": true, 00:11:29.654 "data_offset": 0, 00:11:29.654 "data_size": 65536 00:11:29.654 } 00:11:29.654 ] 00:11:29.654 }' 00:11:29.654 16:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.654 16:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ba992d7-757e-4b2c-8302-aef73a1fce91 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.220 16:19:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 [2024-10-08 16:19:23.511385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.220 [2024-10-08 16:19:23.511481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.220 [2024-10-08 16:19:23.511494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.220 [2024-10-08 16:19:23.511873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:30.220 [2024-10-08 16:19:23.512092] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.220 [2024-10-08 16:19:23.512115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.220 [2024-10-08 16:19:23.512423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.220 NewBaseBdev 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.220 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 [ 00:11:30.220 { 00:11:30.220 "name": "NewBaseBdev", 00:11:30.220 "aliases": [ 00:11:30.220 "4ba992d7-757e-4b2c-8302-aef73a1fce91" 00:11:30.220 ], 00:11:30.220 "product_name": "Malloc disk", 00:11:30.220 "block_size": 512, 00:11:30.220 "num_blocks": 65536, 00:11:30.220 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:30.220 "assigned_rate_limits": { 00:11:30.220 "rw_ios_per_sec": 0, 00:11:30.220 "rw_mbytes_per_sec": 0, 00:11:30.220 "r_mbytes_per_sec": 0, 00:11:30.220 "w_mbytes_per_sec": 0 00:11:30.220 }, 00:11:30.220 "claimed": true, 00:11:30.220 "claim_type": "exclusive_write", 00:11:30.220 "zoned": false, 00:11:30.220 "supported_io_types": { 00:11:30.220 "read": true, 00:11:30.220 "write": true, 00:11:30.220 "unmap": true, 00:11:30.220 "flush": true, 00:11:30.220 "reset": true, 00:11:30.220 "nvme_admin": false, 00:11:30.220 "nvme_io": false, 00:11:30.220 "nvme_io_md": false, 00:11:30.220 "write_zeroes": true, 00:11:30.220 "zcopy": true, 00:11:30.220 "get_zone_info": false, 00:11:30.220 "zone_management": false, 00:11:30.220 "zone_append": false, 00:11:30.220 "compare": false, 00:11:30.220 "compare_and_write": false, 00:11:30.220 "abort": true, 00:11:30.220 "seek_hole": false, 00:11:30.220 "seek_data": false, 00:11:30.220 "copy": true, 00:11:30.220 "nvme_iov_md": false 00:11:30.220 }, 00:11:30.220 "memory_domains": [ 00:11:30.220 { 00:11:30.220 
"dma_device_id": "system", 00:11:30.478 "dma_device_type": 1 00:11:30.478 }, 00:11:30.478 { 00:11:30.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.478 "dma_device_type": 2 00:11:30.478 } 00:11:30.478 ], 00:11:30.478 "driver_specific": {} 00:11:30.478 } 00:11:30.478 ] 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 16:19:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.478 "name": "Existed_Raid", 00:11:30.478 "uuid": "4f404ba0-9639-4366-a6c8-d3c4a935a7ee", 00:11:30.478 "strip_size_kb": 0, 00:11:30.478 "state": "online", 00:11:30.478 "raid_level": "raid1", 00:11:30.478 "superblock": false, 00:11:30.478 "num_base_bdevs": 3, 00:11:30.478 "num_base_bdevs_discovered": 3, 00:11:30.478 "num_base_bdevs_operational": 3, 00:11:30.478 "base_bdevs_list": [ 00:11:30.478 { 00:11:30.478 "name": "NewBaseBdev", 00:11:30.478 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:30.478 "is_configured": true, 00:11:30.478 "data_offset": 0, 00:11:30.478 "data_size": 65536 00:11:30.478 }, 00:11:30.478 { 00:11:30.478 "name": "BaseBdev2", 00:11:30.478 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:30.478 "is_configured": true, 00:11:30.478 "data_offset": 0, 00:11:30.478 "data_size": 65536 00:11:30.478 }, 00:11:30.478 { 00:11:30.478 "name": "BaseBdev3", 00:11:30.478 "uuid": "a632b68a-1016-4d16-972c-22296d46758d", 00:11:30.478 "is_configured": true, 00:11:30.478 "data_offset": 0, 00:11:30.478 "data_size": 65536 00:11:30.478 } 00:11:30.478 ] 00:11:30.478 }' 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.478 16:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.043 
16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.043 [2024-10-08 16:19:24.076005] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.043 "name": "Existed_Raid", 00:11:31.043 "aliases": [ 00:11:31.043 "4f404ba0-9639-4366-a6c8-d3c4a935a7ee" 00:11:31.043 ], 00:11:31.043 "product_name": "Raid Volume", 00:11:31.043 "block_size": 512, 00:11:31.043 "num_blocks": 65536, 00:11:31.043 "uuid": "4f404ba0-9639-4366-a6c8-d3c4a935a7ee", 00:11:31.043 "assigned_rate_limits": { 00:11:31.043 "rw_ios_per_sec": 0, 00:11:31.043 "rw_mbytes_per_sec": 0, 00:11:31.043 "r_mbytes_per_sec": 0, 00:11:31.043 "w_mbytes_per_sec": 0 00:11:31.043 }, 00:11:31.043 "claimed": false, 00:11:31.043 "zoned": false, 00:11:31.043 "supported_io_types": { 00:11:31.043 "read": true, 00:11:31.043 "write": true, 00:11:31.043 "unmap": false, 00:11:31.043 "flush": false, 00:11:31.043 "reset": true, 00:11:31.043 "nvme_admin": false, 00:11:31.043 "nvme_io": false, 00:11:31.043 "nvme_io_md": false, 00:11:31.043 "write_zeroes": true, 00:11:31.043 "zcopy": false, 00:11:31.043 
"get_zone_info": false, 00:11:31.043 "zone_management": false, 00:11:31.043 "zone_append": false, 00:11:31.043 "compare": false, 00:11:31.043 "compare_and_write": false, 00:11:31.043 "abort": false, 00:11:31.043 "seek_hole": false, 00:11:31.043 "seek_data": false, 00:11:31.043 "copy": false, 00:11:31.043 "nvme_iov_md": false 00:11:31.043 }, 00:11:31.043 "memory_domains": [ 00:11:31.043 { 00:11:31.043 "dma_device_id": "system", 00:11:31.043 "dma_device_type": 1 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.043 "dma_device_type": 2 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "dma_device_id": "system", 00:11:31.043 "dma_device_type": 1 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.043 "dma_device_type": 2 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "dma_device_id": "system", 00:11:31.043 "dma_device_type": 1 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.043 "dma_device_type": 2 00:11:31.043 } 00:11:31.043 ], 00:11:31.043 "driver_specific": { 00:11:31.043 "raid": { 00:11:31.043 "uuid": "4f404ba0-9639-4366-a6c8-d3c4a935a7ee", 00:11:31.043 "strip_size_kb": 0, 00:11:31.043 "state": "online", 00:11:31.043 "raid_level": "raid1", 00:11:31.043 "superblock": false, 00:11:31.043 "num_base_bdevs": 3, 00:11:31.043 "num_base_bdevs_discovered": 3, 00:11:31.043 "num_base_bdevs_operational": 3, 00:11:31.043 "base_bdevs_list": [ 00:11:31.043 { 00:11:31.043 "name": "NewBaseBdev", 00:11:31.043 "uuid": "4ba992d7-757e-4b2c-8302-aef73a1fce91", 00:11:31.043 "is_configured": true, 00:11:31.043 "data_offset": 0, 00:11:31.043 "data_size": 65536 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "name": "BaseBdev2", 00:11:31.043 "uuid": "a7ed8c86-ea27-4e2a-b85f-6d4d6c2fcf3c", 00:11:31.043 "is_configured": true, 00:11:31.043 "data_offset": 0, 00:11:31.043 "data_size": 65536 00:11:31.043 }, 00:11:31.043 { 00:11:31.043 "name": "BaseBdev3", 00:11:31.043 "uuid": 
"a632b68a-1016-4d16-972c-22296d46758d", 00:11:31.043 "is_configured": true, 00:11:31.043 "data_offset": 0, 00:11:31.043 "data_size": 65536 00:11:31.043 } 00:11:31.043 ] 00:11:31.043 } 00:11:31.043 } 00:11:31.043 }' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:31.043 BaseBdev2 00:11:31.043 BaseBdev3' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.043 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.302 
[2024-10-08 16:19:24.367668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.302 [2024-10-08 16:19:24.367712] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.302 [2024-10-08 16:19:24.367809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.302 [2024-10-08 16:19:24.368185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.302 [2024-10-08 16:19:24.368202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67768 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67768 ']' 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67768 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67768 00:11:31.302 killing process with pid 67768 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67768' 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67768 00:11:31.302 [2024-10-08 
16:19:24.410293] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.302 16:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67768 00:11:31.559 [2024-10-08 16:19:24.683826] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.935 16:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.935 ************************************ 00:11:32.935 END TEST raid_state_function_test 00:11:32.935 ************************************ 00:11:32.935 00:11:32.935 real 0m12.008s 00:11:32.935 user 0m19.581s 00:11:32.935 sys 0m1.678s 00:11:32.935 16:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.935 16:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.935 16:19:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:32.935 16:19:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:32.935 16:19:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.935 16:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.935 ************************************ 00:11:32.935 START TEST raid_state_function_test_sb 00:11:32.935 ************************************ 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.935 16:19:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.935 
16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.935 Process raid pid: 68407 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68407 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68407' 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68407 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68407 ']' 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.935 16:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.935 [2024-10-08 16:19:26.163459] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:11:32.935 [2024-10-08 16:19:26.163663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.193 [2024-10-08 16:19:26.334548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.450 [2024-10-08 16:19:26.630633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.708 [2024-10-08 16:19:26.860840] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.708 [2024-10-08 16:19:26.860908] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.967 [2024-10-08 16:19:27.199748] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.967 [2024-10-08 16:19:27.199823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.967 [2024-10-08 16:19:27.199842] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.967 [2024-10-08 16:19:27.199862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.967 [2024-10-08 16:19:27.199873] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:33.967 [2024-10-08 16:19:27.199888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.967 "name": "Existed_Raid", 00:11:33.967 "uuid": "0bfada11-eb0a-4c48-b327-a71dc5c0501a", 00:11:33.967 "strip_size_kb": 0, 00:11:33.967 "state": "configuring", 00:11:33.967 "raid_level": "raid1", 00:11:33.967 "superblock": true, 00:11:33.967 "num_base_bdevs": 3, 00:11:33.967 "num_base_bdevs_discovered": 0, 00:11:33.967 "num_base_bdevs_operational": 3, 00:11:33.967 "base_bdevs_list": [ 00:11:33.967 { 00:11:33.967 "name": "BaseBdev1", 00:11:33.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.967 "is_configured": false, 00:11:33.967 "data_offset": 0, 00:11:33.967 "data_size": 0 00:11:33.967 }, 00:11:33.967 { 00:11:33.967 "name": "BaseBdev2", 00:11:33.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.967 "is_configured": false, 00:11:33.967 "data_offset": 0, 00:11:33.967 "data_size": 0 00:11:33.967 }, 00:11:33.967 { 00:11:33.967 "name": "BaseBdev3", 00:11:33.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.967 "is_configured": false, 00:11:33.967 "data_offset": 0, 00:11:33.967 "data_size": 0 00:11:33.967 } 00:11:33.967 ] 00:11:33.967 }' 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.967 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.532 [2024-10-08 16:19:27.708310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.532 [2024-10-08 16:19:27.708370] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.532 [2024-10-08 16:19:27.716335] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.532 [2024-10-08 16:19:27.716407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.532 [2024-10-08 16:19:27.716425] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.532 [2024-10-08 16:19:27.716442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.532 [2024-10-08 16:19:27.716453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.532 [2024-10-08 16:19:27.716477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.532 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.532 [2024-10-08 16:19:27.781623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.532 BaseBdev1 
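The `verify_raid_bdev_state` calls traced above extract one raid bdev from the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal Python equivalent of that filter, using an abridged two-element sample in the same shape as the `raid_bdev_info` JSON printed in the trace (the second entry is invented for illustration):

```python
import json

# Abridged `bdev_raid_get_bdevs all` output; the shape matches the
# raid_bdev_info JSON in the trace. "Other_Raid" is a made-up second
# entry so the filter has something to skip.
bdevs = json.loads(
    '[{"name": "Existed_Raid", "state": "configuring"},'
    ' {"name": "Other_Raid", "state": "online"}]'
)

# Python equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
raid = next(b for b in bdevs if b["name"] == "Existed_Raid")
print(raid["state"])  # → configuring
```

`next()` raises `StopIteration` if no bdev matches, which mirrors the shell helper ending up with an empty `raid_bdev_info` when the raid bdev does not exist.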
00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.533 [ 00:11:34.533 { 00:11:34.533 "name": "BaseBdev1", 00:11:34.533 "aliases": [ 00:11:34.533 "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a" 00:11:34.533 ], 00:11:34.533 "product_name": "Malloc disk", 00:11:34.533 "block_size": 512, 00:11:34.533 "num_blocks": 65536, 00:11:34.533 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:34.533 "assigned_rate_limits": { 00:11:34.533 
"rw_ios_per_sec": 0, 00:11:34.533 "rw_mbytes_per_sec": 0, 00:11:34.533 "r_mbytes_per_sec": 0, 00:11:34.533 "w_mbytes_per_sec": 0 00:11:34.533 }, 00:11:34.533 "claimed": true, 00:11:34.533 "claim_type": "exclusive_write", 00:11:34.533 "zoned": false, 00:11:34.533 "supported_io_types": { 00:11:34.533 "read": true, 00:11:34.533 "write": true, 00:11:34.533 "unmap": true, 00:11:34.533 "flush": true, 00:11:34.533 "reset": true, 00:11:34.533 "nvme_admin": false, 00:11:34.533 "nvme_io": false, 00:11:34.533 "nvme_io_md": false, 00:11:34.533 "write_zeroes": true, 00:11:34.533 "zcopy": true, 00:11:34.533 "get_zone_info": false, 00:11:34.533 "zone_management": false, 00:11:34.533 "zone_append": false, 00:11:34.533 "compare": false, 00:11:34.533 "compare_and_write": false, 00:11:34.533 "abort": true, 00:11:34.533 "seek_hole": false, 00:11:34.533 "seek_data": false, 00:11:34.533 "copy": true, 00:11:34.533 "nvme_iov_md": false 00:11:34.533 }, 00:11:34.533 "memory_domains": [ 00:11:34.533 { 00:11:34.533 "dma_device_id": "system", 00:11:34.533 "dma_device_type": 1 00:11:34.533 }, 00:11:34.533 { 00:11:34.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.533 "dma_device_type": 2 00:11:34.533 } 00:11:34.533 ], 00:11:34.533 "driver_specific": {} 00:11:34.533 } 00:11:34.533 ] 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.533 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.790 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.790 "name": "Existed_Raid", 00:11:34.790 "uuid": "e7fe62d9-9110-40c2-847a-b80fffe85ab0", 00:11:34.790 "strip_size_kb": 0, 00:11:34.790 "state": "configuring", 00:11:34.790 "raid_level": "raid1", 00:11:34.790 "superblock": true, 00:11:34.790 "num_base_bdevs": 3, 00:11:34.790 "num_base_bdevs_discovered": 1, 00:11:34.790 "num_base_bdevs_operational": 3, 00:11:34.790 "base_bdevs_list": [ 00:11:34.790 { 00:11:34.790 "name": "BaseBdev1", 00:11:34.790 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:34.790 "is_configured": true, 00:11:34.790 "data_offset": 2048, 00:11:34.790 "data_size": 63488 
00:11:34.790 }, 00:11:34.790 { 00:11:34.790 "name": "BaseBdev2", 00:11:34.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.790 "is_configured": false, 00:11:34.790 "data_offset": 0, 00:11:34.790 "data_size": 0 00:11:34.790 }, 00:11:34.790 { 00:11:34.790 "name": "BaseBdev3", 00:11:34.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.790 "is_configured": false, 00:11:34.790 "data_offset": 0, 00:11:34.790 "data_size": 0 00:11:34.790 } 00:11:34.790 ] 00:11:34.790 }' 00:11:34.790 16:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.790 16:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.048 [2024-10-08 16:19:28.281742] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.048 [2024-10-08 16:19:28.281820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.048 [2024-10-08 16:19:28.289762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.048 [2024-10-08 16:19:28.292398] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.048 [2024-10-08 16:19:28.292460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.048 [2024-10-08 16:19:28.292479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.048 [2024-10-08 16:19:28.292496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.048 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.049 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.049 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.049 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.049 "name": "Existed_Raid", 00:11:35.049 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:35.049 "strip_size_kb": 0, 00:11:35.049 "state": "configuring", 00:11:35.049 "raid_level": "raid1", 00:11:35.049 "superblock": true, 00:11:35.049 "num_base_bdevs": 3, 00:11:35.049 "num_base_bdevs_discovered": 1, 00:11:35.049 "num_base_bdevs_operational": 3, 00:11:35.049 "base_bdevs_list": [ 00:11:35.049 { 00:11:35.049 "name": "BaseBdev1", 00:11:35.049 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:35.049 "is_configured": true, 00:11:35.049 "data_offset": 2048, 00:11:35.049 "data_size": 63488 00:11:35.049 }, 00:11:35.049 { 00:11:35.049 "name": "BaseBdev2", 00:11:35.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.049 "is_configured": false, 00:11:35.049 "data_offset": 0, 00:11:35.049 "data_size": 0 00:11:35.049 }, 00:11:35.049 { 00:11:35.049 "name": "BaseBdev3", 00:11:35.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.049 "is_configured": false, 00:11:35.049 "data_offset": 0, 00:11:35.049 "data_size": 0 00:11:35.049 } 00:11:35.049 ] 00:11:35.049 }' 00:11:35.049 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.049 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 [2024-10-08 16:19:28.771874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.613 BaseBdev2 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 [ 00:11:35.613 { 00:11:35.613 "name": "BaseBdev2", 00:11:35.613 "aliases": [ 00:11:35.613 "24ac41c6-36a2-406e-9290-d9f6400f7548" 00:11:35.613 ], 00:11:35.613 "product_name": "Malloc disk", 00:11:35.613 "block_size": 512, 00:11:35.613 "num_blocks": 65536, 00:11:35.613 "uuid": "24ac41c6-36a2-406e-9290-d9f6400f7548", 00:11:35.613 "assigned_rate_limits": { 00:11:35.613 "rw_ios_per_sec": 0, 00:11:35.613 "rw_mbytes_per_sec": 0, 00:11:35.613 "r_mbytes_per_sec": 0, 00:11:35.613 "w_mbytes_per_sec": 0 00:11:35.613 }, 00:11:35.613 "claimed": true, 00:11:35.613 "claim_type": "exclusive_write", 00:11:35.613 "zoned": false, 00:11:35.613 "supported_io_types": { 00:11:35.613 "read": true, 00:11:35.613 "write": true, 00:11:35.613 "unmap": true, 00:11:35.613 "flush": true, 00:11:35.613 "reset": true, 00:11:35.613 "nvme_admin": false, 00:11:35.613 "nvme_io": false, 00:11:35.613 "nvme_io_md": false, 00:11:35.613 "write_zeroes": true, 00:11:35.613 "zcopy": true, 00:11:35.613 "get_zone_info": false, 00:11:35.613 "zone_management": false, 00:11:35.613 "zone_append": false, 00:11:35.613 "compare": false, 00:11:35.613 "compare_and_write": false, 00:11:35.613 "abort": true, 00:11:35.613 "seek_hole": false, 00:11:35.613 "seek_data": false, 00:11:35.613 "copy": true, 00:11:35.613 "nvme_iov_md": false 00:11:35.613 }, 00:11:35.613 "memory_domains": [ 00:11:35.613 { 00:11:35.613 "dma_device_id": "system", 00:11:35.613 "dma_device_type": 1 00:11:35.613 }, 00:11:35.613 { 00:11:35.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.613 "dma_device_type": 2 00:11:35.613 } 00:11:35.613 ], 00:11:35.613 "driver_specific": {} 00:11:35.613 } 00:11:35.613 ] 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
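The `return 0` above is `waitforbdev` succeeding for BaseBdev2: per the trace, the helper takes a bdev name, defaults `bdev_timeout` to 2000 ms, runs `bdev_wait_for_examine`, then `bdev_get_bdevs -b <name> -t 2000`. A rough Python sketch of that wait loop, with `rpc_cmd` replaced by a hypothetical stub (the real helper shells out to SPDK's rpc.py against a live target):

```python
import time

def rpc_cmd(*args):
    # Hypothetical stand-in for the test suite's rpc_cmd wrapper; it
    # pretends the bdev becomes visible on the second poll.
    rpc_cmd.calls += 1
    return 0 if rpc_cmd.calls >= 2 else 1
rpc_cmd.calls = 0

def waitforbdev(name, timeout_ms=2000, poll_interval=0.05):
    """Sketch of the shell helper: poll bdev_get_bdevs until the bdev exists."""
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        if rpc_cmd("bdev_get_bdevs", "-b", name, "-t", str(timeout_ms)) == 0:
            return True
        time.sleep(poll_interval)
    return False

print(waitforbdev("BaseBdev2"))  # True once the stub reports the bdev
```

In the real helper the `-t 2000` flag makes the RPC itself block until the bdev is registered or the timeout expires, so the outer retry loop only covers transient RPC failures.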
00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 
16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.613 "name": "Existed_Raid", 00:11:35.613 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:35.613 "strip_size_kb": 0, 00:11:35.613 "state": "configuring", 00:11:35.613 "raid_level": "raid1", 00:11:35.613 "superblock": true, 00:11:35.613 "num_base_bdevs": 3, 00:11:35.613 "num_base_bdevs_discovered": 2, 00:11:35.613 "num_base_bdevs_operational": 3, 00:11:35.613 "base_bdevs_list": [ 00:11:35.613 { 00:11:35.613 "name": "BaseBdev1", 00:11:35.613 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:35.613 "is_configured": true, 00:11:35.613 "data_offset": 2048, 00:11:35.613 "data_size": 63488 00:11:35.613 }, 00:11:35.614 { 00:11:35.614 "name": "BaseBdev2", 00:11:35.614 "uuid": "24ac41c6-36a2-406e-9290-d9f6400f7548", 00:11:35.614 "is_configured": true, 00:11:35.614 "data_offset": 2048, 00:11:35.614 "data_size": 63488 00:11:35.614 }, 00:11:35.614 { 00:11:35.614 "name": "BaseBdev3", 00:11:35.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.614 "is_configured": false, 00:11:35.614 "data_offset": 0, 00:11:35.614 "data_size": 0 00:11:35.614 } 00:11:35.614 ] 00:11:35.614 }' 00:11:35.614 16:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.614 16:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.179 [2024-10-08 16:19:29.318074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.179 [2024-10-08 16:19:29.318600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:36.179 [2024-10-08 16:19:29.318639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.179 BaseBdev3 00:11:36.179 [2024-10-08 16:19:29.318999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:36.179 [2024-10-08 16:19:29.319215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.179 [2024-10-08 16:19:29.319240] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.179 [2024-10-08 16:19:29.319427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.179 16:19:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.179 [ 00:11:36.179 { 00:11:36.179 "name": "BaseBdev3", 00:11:36.179 "aliases": [ 00:11:36.179 "a3c25b67-4028-43d6-b45c-469a48a01d12" 00:11:36.179 ], 00:11:36.179 "product_name": "Malloc disk", 00:11:36.179 "block_size": 512, 00:11:36.179 "num_blocks": 65536, 00:11:36.179 "uuid": "a3c25b67-4028-43d6-b45c-469a48a01d12", 00:11:36.179 "assigned_rate_limits": { 00:11:36.179 "rw_ios_per_sec": 0, 00:11:36.179 "rw_mbytes_per_sec": 0, 00:11:36.179 "r_mbytes_per_sec": 0, 00:11:36.179 "w_mbytes_per_sec": 0 00:11:36.179 }, 00:11:36.179 "claimed": true, 00:11:36.179 "claim_type": "exclusive_write", 00:11:36.179 "zoned": false, 00:11:36.179 "supported_io_types": { 00:11:36.179 "read": true, 00:11:36.179 "write": true, 00:11:36.179 "unmap": true, 00:11:36.179 "flush": true, 00:11:36.179 "reset": true, 00:11:36.179 "nvme_admin": false, 00:11:36.179 "nvme_io": false, 00:11:36.179 "nvme_io_md": false, 00:11:36.179 "write_zeroes": true, 00:11:36.179 "zcopy": true, 00:11:36.179 "get_zone_info": false, 00:11:36.179 "zone_management": false, 00:11:36.179 "zone_append": false, 00:11:36.179 "compare": false, 00:11:36.179 "compare_and_write": false, 00:11:36.179 "abort": true, 00:11:36.179 "seek_hole": false, 00:11:36.179 "seek_data": false, 00:11:36.179 "copy": true, 00:11:36.179 "nvme_iov_md": false 00:11:36.179 }, 00:11:36.179 "memory_domains": [ 00:11:36.179 { 00:11:36.179 "dma_device_id": "system", 00:11:36.179 "dma_device_type": 1 00:11:36.179 }, 00:11:36.179 { 00:11:36.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.179 "dma_device_type": 2 00:11:36.179 } 00:11:36.179 ], 00:11:36.179 "driver_specific": {} 00:11:36.179 } 00:11:36.179 ] 
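Each `verify_raid_bdev_state Existed_Raid <state> raid1 0 3` call in this trace compares fields of the captured `raid_bdev_info` against the expected values. A small Python sketch of that comparison, fed the abridged "online" JSON from the trace above (the helper name and field set are taken from the trace; the exact shell comparison logic lives in `bdev/bdev_raid.sh` and may check more than this):

```python
import json

# Abridged raid_bdev_info from the trace, after all three base bdevs
# were claimed and the array came online.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Compare the fields the shell helper's arguments correspond to."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 3))  # → True
```

This mirrors the test's progression: the same check with `expected_state="configuring"` passes while base bdevs are still missing, and flips to requiring `"online"` once `num_base_bdevs_discovered` reaches 3.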
00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.179 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.180 
16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.180 "name": "Existed_Raid", 00:11:36.180 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:36.180 "strip_size_kb": 0, 00:11:36.180 "state": "online", 00:11:36.180 "raid_level": "raid1", 00:11:36.180 "superblock": true, 00:11:36.180 "num_base_bdevs": 3, 00:11:36.180 "num_base_bdevs_discovered": 3, 00:11:36.180 "num_base_bdevs_operational": 3, 00:11:36.180 "base_bdevs_list": [ 00:11:36.180 { 00:11:36.180 "name": "BaseBdev1", 00:11:36.180 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:36.180 "is_configured": true, 00:11:36.180 "data_offset": 2048, 00:11:36.180 "data_size": 63488 00:11:36.180 }, 00:11:36.180 { 00:11:36.180 "name": "BaseBdev2", 00:11:36.180 "uuid": "24ac41c6-36a2-406e-9290-d9f6400f7548", 00:11:36.180 "is_configured": true, 00:11:36.180 "data_offset": 2048, 00:11:36.180 "data_size": 63488 00:11:36.180 }, 00:11:36.180 { 00:11:36.180 "name": "BaseBdev3", 00:11:36.180 "uuid": "a3c25b67-4028-43d6-b45c-469a48a01d12", 00:11:36.180 "is_configured": true, 00:11:36.180 "data_offset": 2048, 00:11:36.180 "data_size": 63488 00:11:36.180 } 00:11:36.180 ] 00:11:36.180 }' 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.180 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.744 [2024-10-08 16:19:29.838731] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.744 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.744 "name": "Existed_Raid", 00:11:36.744 "aliases": [ 00:11:36.744 "bca44a89-208e-472f-a181-ca1c7b54a2d1" 00:11:36.744 ], 00:11:36.744 "product_name": "Raid Volume", 00:11:36.744 "block_size": 512, 00:11:36.744 "num_blocks": 63488, 00:11:36.744 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:36.744 "assigned_rate_limits": { 00:11:36.744 "rw_ios_per_sec": 0, 00:11:36.744 "rw_mbytes_per_sec": 0, 00:11:36.744 "r_mbytes_per_sec": 0, 00:11:36.744 "w_mbytes_per_sec": 0 00:11:36.744 }, 00:11:36.744 "claimed": false, 00:11:36.744 "zoned": false, 00:11:36.744 "supported_io_types": { 00:11:36.744 "read": true, 00:11:36.744 "write": true, 00:11:36.744 "unmap": false, 00:11:36.744 "flush": false, 00:11:36.744 "reset": true, 00:11:36.744 "nvme_admin": false, 00:11:36.744 "nvme_io": false, 00:11:36.744 "nvme_io_md": false, 00:11:36.744 "write_zeroes": true, 
00:11:36.744 "zcopy": false, 00:11:36.744 "get_zone_info": false, 00:11:36.744 "zone_management": false, 00:11:36.744 "zone_append": false, 00:11:36.744 "compare": false, 00:11:36.744 "compare_and_write": false, 00:11:36.744 "abort": false, 00:11:36.744 "seek_hole": false, 00:11:36.744 "seek_data": false, 00:11:36.744 "copy": false, 00:11:36.744 "nvme_iov_md": false 00:11:36.744 }, 00:11:36.744 "memory_domains": [ 00:11:36.744 { 00:11:36.744 "dma_device_id": "system", 00:11:36.744 "dma_device_type": 1 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.744 "dma_device_type": 2 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "system", 00:11:36.744 "dma_device_type": 1 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.744 "dma_device_type": 2 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "system", 00:11:36.744 "dma_device_type": 1 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.744 "dma_device_type": 2 00:11:36.744 } 00:11:36.744 ], 00:11:36.744 "driver_specific": { 00:11:36.744 "raid": { 00:11:36.744 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:36.744 "strip_size_kb": 0, 00:11:36.744 "state": "online", 00:11:36.744 "raid_level": "raid1", 00:11:36.744 "superblock": true, 00:11:36.745 "num_base_bdevs": 3, 00:11:36.745 "num_base_bdevs_discovered": 3, 00:11:36.745 "num_base_bdevs_operational": 3, 00:11:36.745 "base_bdevs_list": [ 00:11:36.745 { 00:11:36.745 "name": "BaseBdev1", 00:11:36.745 "uuid": "8a49ff22-b7a9-4389-8c35-8ca3d1b15d9a", 00:11:36.745 "is_configured": true, 00:11:36.745 "data_offset": 2048, 00:11:36.745 "data_size": 63488 00:11:36.745 }, 00:11:36.745 { 00:11:36.745 "name": "BaseBdev2", 00:11:36.745 "uuid": "24ac41c6-36a2-406e-9290-d9f6400f7548", 00:11:36.745 "is_configured": true, 00:11:36.745 "data_offset": 2048, 00:11:36.745 "data_size": 63488 00:11:36.745 }, 00:11:36.745 { 
00:11:36.745 "name": "BaseBdev3", 00:11:36.745 "uuid": "a3c25b67-4028-43d6-b45c-469a48a01d12", 00:11:36.745 "is_configured": true, 00:11:36.745 "data_offset": 2048, 00:11:36.745 "data_size": 63488 00:11:36.745 } 00:11:36.745 ] 00:11:36.745 } 00:11:36.745 } 00:11:36.745 }' 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.745 BaseBdev2 00:11:36.745 BaseBdev3' 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.745 16:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.745 16:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.745 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.002 [2024-10-08 16:19:30.154353] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.002 
16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.002 "name": "Existed_Raid", 00:11:37.002 "uuid": "bca44a89-208e-472f-a181-ca1c7b54a2d1", 00:11:37.002 "strip_size_kb": 0, 00:11:37.002 "state": "online", 00:11:37.002 "raid_level": "raid1", 00:11:37.002 "superblock": true, 00:11:37.002 "num_base_bdevs": 3, 00:11:37.002 "num_base_bdevs_discovered": 2, 00:11:37.002 "num_base_bdevs_operational": 2, 00:11:37.002 "base_bdevs_list": [ 00:11:37.002 { 00:11:37.002 "name": null, 00:11:37.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.002 "is_configured": false, 00:11:37.002 "data_offset": 0, 00:11:37.002 "data_size": 63488 00:11:37.002 }, 00:11:37.002 { 00:11:37.002 "name": "BaseBdev2", 00:11:37.002 "uuid": "24ac41c6-36a2-406e-9290-d9f6400f7548", 00:11:37.002 "is_configured": true, 00:11:37.002 "data_offset": 2048, 00:11:37.002 "data_size": 63488 00:11:37.002 }, 00:11:37.002 { 00:11:37.002 "name": "BaseBdev3", 00:11:37.002 "uuid": "a3c25b67-4028-43d6-b45c-469a48a01d12", 00:11:37.002 "is_configured": true, 00:11:37.002 "data_offset": 2048, 00:11:37.002 "data_size": 63488 00:11:37.002 } 00:11:37.002 ] 00:11:37.002 }' 00:11:37.002 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.002 
16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.566 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 [2024-10-08 16:19:30.819454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.824 16:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 [2024-10-08 16:19:30.964427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.824 [2024-10-08 16:19:30.964601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.824 [2024-10-08 16:19:31.055161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.824 [2024-10-08 16:19:31.055243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.824 [2024-10-08 16:19:31.055267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.824 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 BaseBdev2 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 [ 00:11:38.083 { 00:11:38.083 "name": "BaseBdev2", 00:11:38.083 "aliases": [ 00:11:38.083 "7101b90b-b884-4d44-b419-3586dc40b374" 00:11:38.083 ], 00:11:38.083 "product_name": "Malloc disk", 00:11:38.083 "block_size": 512, 00:11:38.083 "num_blocks": 65536, 00:11:38.083 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:38.083 "assigned_rate_limits": { 00:11:38.083 "rw_ios_per_sec": 0, 00:11:38.083 "rw_mbytes_per_sec": 0, 00:11:38.083 "r_mbytes_per_sec": 0, 00:11:38.083 "w_mbytes_per_sec": 0 00:11:38.083 }, 00:11:38.083 "claimed": false, 00:11:38.083 "zoned": false, 00:11:38.083 "supported_io_types": { 00:11:38.083 "read": true, 00:11:38.083 "write": true, 00:11:38.083 "unmap": true, 00:11:38.083 "flush": true, 00:11:38.083 "reset": true, 00:11:38.083 "nvme_admin": false, 00:11:38.083 "nvme_io": false, 00:11:38.083 
"nvme_io_md": false, 00:11:38.083 "write_zeroes": true, 00:11:38.083 "zcopy": true, 00:11:38.083 "get_zone_info": false, 00:11:38.083 "zone_management": false, 00:11:38.083 "zone_append": false, 00:11:38.083 "compare": false, 00:11:38.083 "compare_and_write": false, 00:11:38.083 "abort": true, 00:11:38.083 "seek_hole": false, 00:11:38.083 "seek_data": false, 00:11:38.083 "copy": true, 00:11:38.083 "nvme_iov_md": false 00:11:38.083 }, 00:11:38.083 "memory_domains": [ 00:11:38.083 { 00:11:38.083 "dma_device_id": "system", 00:11:38.083 "dma_device_type": 1 00:11:38.083 }, 00:11:38.083 { 00:11:38.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.083 "dma_device_type": 2 00:11:38.083 } 00:11:38.083 ], 00:11:38.083 "driver_specific": {} 00:11:38.083 } 00:11:38.083 ] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 BaseBdev3 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.083 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.083 [ 00:11:38.083 { 00:11:38.083 "name": "BaseBdev3", 00:11:38.083 "aliases": [ 00:11:38.083 "93bc6dd1-9dd6-4968-8080-5234bcfc36e1" 00:11:38.083 ], 00:11:38.083 "product_name": "Malloc disk", 00:11:38.083 "block_size": 512, 00:11:38.083 "num_blocks": 65536, 00:11:38.083 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:38.083 "assigned_rate_limits": { 00:11:38.083 "rw_ios_per_sec": 0, 00:11:38.083 "rw_mbytes_per_sec": 0, 00:11:38.083 "r_mbytes_per_sec": 0, 00:11:38.083 "w_mbytes_per_sec": 0 00:11:38.083 }, 00:11:38.083 "claimed": false, 00:11:38.083 "zoned": false, 00:11:38.083 "supported_io_types": { 00:11:38.083 "read": true, 00:11:38.083 "write": true, 00:11:38.083 "unmap": true, 00:11:38.083 "flush": true, 00:11:38.083 "reset": true, 00:11:38.083 "nvme_admin": false, 
00:11:38.083 "nvme_io": false, 00:11:38.083 "nvme_io_md": false, 00:11:38.083 "write_zeroes": true, 00:11:38.083 "zcopy": true, 00:11:38.083 "get_zone_info": false, 00:11:38.083 "zone_management": false, 00:11:38.083 "zone_append": false, 00:11:38.083 "compare": false, 00:11:38.083 "compare_and_write": false, 00:11:38.083 "abort": true, 00:11:38.083 "seek_hole": false, 00:11:38.083 "seek_data": false, 00:11:38.083 "copy": true, 00:11:38.083 "nvme_iov_md": false 00:11:38.083 }, 00:11:38.083 "memory_domains": [ 00:11:38.083 { 00:11:38.083 "dma_device_id": "system", 00:11:38.083 "dma_device_type": 1 00:11:38.084 }, 00:11:38.084 { 00:11:38.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.084 "dma_device_type": 2 00:11:38.084 } 00:11:38.084 ], 00:11:38.084 "driver_specific": {} 00:11:38.084 } 00:11:38.084 ] 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.084 [2024-10-08 16:19:31.278478] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.084 [2024-10-08 16:19:31.278558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.084 [2024-10-08 16:19:31.278588] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.084 [2024-10-08 16:19:31.281219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.084 
16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.084 "name": "Existed_Raid", 00:11:38.084 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:38.084 "strip_size_kb": 0, 00:11:38.084 "state": "configuring", 00:11:38.084 "raid_level": "raid1", 00:11:38.084 "superblock": true, 00:11:38.084 "num_base_bdevs": 3, 00:11:38.084 "num_base_bdevs_discovered": 2, 00:11:38.084 "num_base_bdevs_operational": 3, 00:11:38.084 "base_bdevs_list": [ 00:11:38.084 { 00:11:38.084 "name": "BaseBdev1", 00:11:38.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.084 "is_configured": false, 00:11:38.084 "data_offset": 0, 00:11:38.084 "data_size": 0 00:11:38.084 }, 00:11:38.084 { 00:11:38.084 "name": "BaseBdev2", 00:11:38.084 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:38.084 "is_configured": true, 00:11:38.084 "data_offset": 2048, 00:11:38.084 "data_size": 63488 00:11:38.084 }, 00:11:38.084 { 00:11:38.084 "name": "BaseBdev3", 00:11:38.084 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:38.084 "is_configured": true, 00:11:38.084 "data_offset": 2048, 00:11:38.084 "data_size": 63488 00:11:38.084 } 00:11:38.084 ] 00:11:38.084 }' 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.084 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.653 [2024-10-08 16:19:31.814662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.653 16:19:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.653 "name": 
"Existed_Raid", 00:11:38.653 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:38.653 "strip_size_kb": 0, 00:11:38.653 "state": "configuring", 00:11:38.653 "raid_level": "raid1", 00:11:38.653 "superblock": true, 00:11:38.653 "num_base_bdevs": 3, 00:11:38.653 "num_base_bdevs_discovered": 1, 00:11:38.653 "num_base_bdevs_operational": 3, 00:11:38.653 "base_bdevs_list": [ 00:11:38.653 { 00:11:38.653 "name": "BaseBdev1", 00:11:38.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.653 "is_configured": false, 00:11:38.653 "data_offset": 0, 00:11:38.653 "data_size": 0 00:11:38.653 }, 00:11:38.653 { 00:11:38.653 "name": null, 00:11:38.653 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:38.653 "is_configured": false, 00:11:38.653 "data_offset": 0, 00:11:38.653 "data_size": 63488 00:11:38.653 }, 00:11:38.653 { 00:11:38.653 "name": "BaseBdev3", 00:11:38.653 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:38.653 "is_configured": true, 00:11:38.653 "data_offset": 2048, 00:11:38.653 "data_size": 63488 00:11:38.653 } 00:11:38.653 ] 00:11:38.653 }' 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.653 16:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.219 
16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 [2024-10-08 16:19:32.467897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.219 BaseBdev1 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 [ 00:11:39.219 { 00:11:39.219 "name": "BaseBdev1", 00:11:39.219 "aliases": [ 00:11:39.219 "5267b489-73db-4fbf-95e9-867af623f535" 00:11:39.219 ], 00:11:39.219 "product_name": "Malloc disk", 00:11:39.219 "block_size": 512, 00:11:39.219 "num_blocks": 65536, 00:11:39.219 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:39.219 "assigned_rate_limits": { 00:11:39.219 "rw_ios_per_sec": 0, 00:11:39.219 "rw_mbytes_per_sec": 0, 00:11:39.219 "r_mbytes_per_sec": 0, 00:11:39.219 "w_mbytes_per_sec": 0 00:11:39.219 }, 00:11:39.219 "claimed": true, 00:11:39.219 "claim_type": "exclusive_write", 00:11:39.219 "zoned": false, 00:11:39.219 "supported_io_types": { 00:11:39.219 "read": true, 00:11:39.219 "write": true, 00:11:39.219 "unmap": true, 00:11:39.219 "flush": true, 00:11:39.219 "reset": true, 00:11:39.219 "nvme_admin": false, 00:11:39.219 "nvme_io": false, 00:11:39.219 "nvme_io_md": false, 00:11:39.219 "write_zeroes": true, 00:11:39.219 "zcopy": true, 00:11:39.219 "get_zone_info": false, 00:11:39.219 "zone_management": false, 00:11:39.219 "zone_append": false, 00:11:39.219 "compare": false, 00:11:39.219 "compare_and_write": false, 00:11:39.219 "abort": true, 00:11:39.219 "seek_hole": false, 00:11:39.219 "seek_data": false, 00:11:39.219 "copy": true, 00:11:39.219 "nvme_iov_md": false 00:11:39.219 }, 00:11:39.219 "memory_domains": [ 00:11:39.219 { 00:11:39.219 "dma_device_id": "system", 00:11:39.219 "dma_device_type": 1 00:11:39.219 }, 00:11:39.219 { 00:11:39.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.219 "dma_device_type": 2 00:11:39.219 } 00:11:39.219 ], 00:11:39.219 "driver_specific": {} 00:11:39.219 } 00:11:39.219 ] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.219 
16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.476 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.476 "name": "Existed_Raid", 00:11:39.476 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:39.476 "strip_size_kb": 0, 
00:11:39.476 "state": "configuring", 00:11:39.476 "raid_level": "raid1", 00:11:39.476 "superblock": true, 00:11:39.476 "num_base_bdevs": 3, 00:11:39.476 "num_base_bdevs_discovered": 2, 00:11:39.476 "num_base_bdevs_operational": 3, 00:11:39.476 "base_bdevs_list": [ 00:11:39.476 { 00:11:39.476 "name": "BaseBdev1", 00:11:39.476 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:39.476 "is_configured": true, 00:11:39.476 "data_offset": 2048, 00:11:39.476 "data_size": 63488 00:11:39.476 }, 00:11:39.476 { 00:11:39.476 "name": null, 00:11:39.476 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:39.476 "is_configured": false, 00:11:39.476 "data_offset": 0, 00:11:39.476 "data_size": 63488 00:11:39.476 }, 00:11:39.476 { 00:11:39.476 "name": "BaseBdev3", 00:11:39.476 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:39.476 "is_configured": true, 00:11:39.476 "data_offset": 2048, 00:11:39.476 "data_size": 63488 00:11:39.476 } 00:11:39.476 ] 00:11:39.476 }' 00:11:39.476 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.476 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.734 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.734 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.734 16:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.734 16:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.734 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.734 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.734 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:39.734 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.734 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.734 [2024-10-08 16:19:33.052136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.990 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.991 "name": "Existed_Raid", 00:11:39.991 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:39.991 "strip_size_kb": 0, 00:11:39.991 "state": "configuring", 00:11:39.991 "raid_level": "raid1", 00:11:39.991 "superblock": true, 00:11:39.991 "num_base_bdevs": 3, 00:11:39.991 "num_base_bdevs_discovered": 1, 00:11:39.991 "num_base_bdevs_operational": 3, 00:11:39.991 "base_bdevs_list": [ 00:11:39.991 { 00:11:39.991 "name": "BaseBdev1", 00:11:39.991 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:39.991 "is_configured": true, 00:11:39.991 "data_offset": 2048, 00:11:39.991 "data_size": 63488 00:11:39.991 }, 00:11:39.991 { 00:11:39.991 "name": null, 00:11:39.991 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:39.991 "is_configured": false, 00:11:39.991 "data_offset": 0, 00:11:39.991 "data_size": 63488 00:11:39.991 }, 00:11:39.991 { 00:11:39.991 "name": null, 00:11:39.991 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:39.991 "is_configured": false, 00:11:39.991 "data_offset": 0, 00:11:39.991 "data_size": 63488 00:11:39.991 } 00:11:39.991 ] 00:11:39.991 }' 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.991 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 [2024-10-08 16:19:33.624247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.555 "name": "Existed_Raid", 00:11:40.555 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:40.555 "strip_size_kb": 0, 00:11:40.555 "state": "configuring", 00:11:40.555 "raid_level": "raid1", 00:11:40.555 "superblock": true, 00:11:40.555 "num_base_bdevs": 3, 00:11:40.555 "num_base_bdevs_discovered": 2, 00:11:40.555 "num_base_bdevs_operational": 3, 00:11:40.555 "base_bdevs_list": [ 00:11:40.555 { 00:11:40.555 "name": "BaseBdev1", 00:11:40.555 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:40.555 "is_configured": true, 00:11:40.555 "data_offset": 2048, 00:11:40.555 "data_size": 63488 00:11:40.555 }, 00:11:40.555 { 00:11:40.555 "name": null, 00:11:40.555 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:40.555 "is_configured": false, 00:11:40.555 "data_offset": 0, 00:11:40.555 "data_size": 63488 00:11:40.555 }, 00:11:40.555 { 00:11:40.555 "name": "BaseBdev3", 00:11:40.555 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:40.555 "is_configured": true, 00:11:40.555 "data_offset": 2048, 00:11:40.555 "data_size": 63488 00:11:40.555 } 00:11:40.555 ] 00:11:40.555 }' 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.555 16:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.812 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.812 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.812 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.812 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.812 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 [2024-10-08 16:19:34.144438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.087 "name": "Existed_Raid", 00:11:41.087 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:41.087 "strip_size_kb": 0, 00:11:41.087 "state": "configuring", 00:11:41.087 "raid_level": "raid1", 00:11:41.087 "superblock": true, 00:11:41.087 "num_base_bdevs": 3, 00:11:41.087 "num_base_bdevs_discovered": 1, 00:11:41.087 "num_base_bdevs_operational": 3, 00:11:41.087 "base_bdevs_list": [ 00:11:41.087 { 00:11:41.087 "name": null, 00:11:41.087 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:41.087 "is_configured": false, 00:11:41.087 "data_offset": 0, 00:11:41.087 "data_size": 63488 00:11:41.087 }, 00:11:41.087 { 00:11:41.087 "name": null, 00:11:41.087 "uuid": 
"7101b90b-b884-4d44-b419-3586dc40b374", 00:11:41.087 "is_configured": false, 00:11:41.087 "data_offset": 0, 00:11:41.087 "data_size": 63488 00:11:41.087 }, 00:11:41.087 { 00:11:41.087 "name": "BaseBdev3", 00:11:41.087 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:41.087 "is_configured": true, 00:11:41.087 "data_offset": 2048, 00:11:41.087 "data_size": 63488 00:11:41.087 } 00:11:41.087 ] 00:11:41.087 }' 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.087 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.652 [2024-10-08 16:19:34.772476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.652 "name": "Existed_Raid", 00:11:41.652 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:41.652 "strip_size_kb": 0, 00:11:41.652 "state": "configuring", 00:11:41.652 
"raid_level": "raid1", 00:11:41.652 "superblock": true, 00:11:41.652 "num_base_bdevs": 3, 00:11:41.652 "num_base_bdevs_discovered": 2, 00:11:41.652 "num_base_bdevs_operational": 3, 00:11:41.652 "base_bdevs_list": [ 00:11:41.652 { 00:11:41.652 "name": null, 00:11:41.652 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:41.652 "is_configured": false, 00:11:41.652 "data_offset": 0, 00:11:41.652 "data_size": 63488 00:11:41.652 }, 00:11:41.652 { 00:11:41.652 "name": "BaseBdev2", 00:11:41.652 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:41.652 "is_configured": true, 00:11:41.652 "data_offset": 2048, 00:11:41.652 "data_size": 63488 00:11:41.652 }, 00:11:41.652 { 00:11:41.652 "name": "BaseBdev3", 00:11:41.652 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:41.652 "is_configured": true, 00:11:41.652 "data_offset": 2048, 00:11:41.652 "data_size": 63488 00:11:41.652 } 00:11:41.652 ] 00:11:41.652 }' 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.652 16:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.216 16:19:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5267b489-73db-4fbf-95e9-867af623f535 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 [2024-10-08 16:19:35.381951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:42.216 [2024-10-08 16:19:35.382249] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.216 [2024-10-08 16:19:35.382268] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.216 NewBaseBdev 00:11:42.216 [2024-10-08 16:19:35.382613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:42.216 [2024-10-08 16:19:35.382824] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.216 [2024-10-08 16:19:35.382847] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:42.216 [2024-10-08 16:19:35.383016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:42.216 
16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.216 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.216 [ 00:11:42.216 { 00:11:42.216 "name": "NewBaseBdev", 00:11:42.216 "aliases": [ 00:11:42.216 "5267b489-73db-4fbf-95e9-867af623f535" 00:11:42.216 ], 00:11:42.216 "product_name": "Malloc disk", 00:11:42.216 "block_size": 512, 00:11:42.216 "num_blocks": 65536, 00:11:42.217 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:42.217 "assigned_rate_limits": { 00:11:42.217 "rw_ios_per_sec": 0, 00:11:42.217 "rw_mbytes_per_sec": 0, 00:11:42.217 "r_mbytes_per_sec": 0, 00:11:42.217 "w_mbytes_per_sec": 0 00:11:42.217 }, 00:11:42.217 "claimed": true, 00:11:42.217 "claim_type": "exclusive_write", 00:11:42.217 
"zoned": false, 00:11:42.217 "supported_io_types": { 00:11:42.217 "read": true, 00:11:42.217 "write": true, 00:11:42.217 "unmap": true, 00:11:42.217 "flush": true, 00:11:42.217 "reset": true, 00:11:42.217 "nvme_admin": false, 00:11:42.217 "nvme_io": false, 00:11:42.217 "nvme_io_md": false, 00:11:42.217 "write_zeroes": true, 00:11:42.217 "zcopy": true, 00:11:42.217 "get_zone_info": false, 00:11:42.217 "zone_management": false, 00:11:42.217 "zone_append": false, 00:11:42.217 "compare": false, 00:11:42.217 "compare_and_write": false, 00:11:42.217 "abort": true, 00:11:42.217 "seek_hole": false, 00:11:42.217 "seek_data": false, 00:11:42.217 "copy": true, 00:11:42.217 "nvme_iov_md": false 00:11:42.217 }, 00:11:42.217 "memory_domains": [ 00:11:42.217 { 00:11:42.217 "dma_device_id": "system", 00:11:42.217 "dma_device_type": 1 00:11:42.217 }, 00:11:42.217 { 00:11:42.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.217 "dma_device_type": 2 00:11:42.217 } 00:11:42.217 ], 00:11:42.217 "driver_specific": {} 00:11:42.217 } 00:11:42.217 ] 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.217 "name": "Existed_Raid", 00:11:42.217 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c", 00:11:42.217 "strip_size_kb": 0, 00:11:42.217 "state": "online", 00:11:42.217 "raid_level": "raid1", 00:11:42.217 "superblock": true, 00:11:42.217 "num_base_bdevs": 3, 00:11:42.217 "num_base_bdevs_discovered": 3, 00:11:42.217 "num_base_bdevs_operational": 3, 00:11:42.217 "base_bdevs_list": [ 00:11:42.217 { 00:11:42.217 "name": "NewBaseBdev", 00:11:42.217 "uuid": "5267b489-73db-4fbf-95e9-867af623f535", 00:11:42.217 "is_configured": true, 00:11:42.217 "data_offset": 2048, 00:11:42.217 "data_size": 63488 00:11:42.217 }, 00:11:42.217 { 00:11:42.217 "name": "BaseBdev2", 00:11:42.217 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374", 00:11:42.217 "is_configured": true, 00:11:42.217 "data_offset": 2048, 00:11:42.217 "data_size": 63488 00:11:42.217 }, 00:11:42.217 
{ 00:11:42.217 "name": "BaseBdev3", 00:11:42.217 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1", 00:11:42.217 "is_configured": true, 00:11:42.217 "data_offset": 2048, 00:11:42.217 "data_size": 63488 00:11:42.217 } 00:11:42.217 ] 00:11:42.217 }' 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.217 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.781 [2024-10-08 16:19:35.930609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.781 "name": "Existed_Raid", 00:11:42.781 
"aliases": [
00:11:42.781 "7c5d0d66-d8fa-4264-a354-9df49798948c"
00:11:42.781 ],
00:11:42.781 "product_name": "Raid Volume",
00:11:42.781 "block_size": 512,
00:11:42.781 "num_blocks": 63488,
00:11:42.781 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c",
00:11:42.781 "assigned_rate_limits": {
00:11:42.781 "rw_ios_per_sec": 0,
00:11:42.781 "rw_mbytes_per_sec": 0,
00:11:42.781 "r_mbytes_per_sec": 0,
00:11:42.781 "w_mbytes_per_sec": 0
00:11:42.781 },
00:11:42.781 "claimed": false,
00:11:42.781 "zoned": false,
00:11:42.781 "supported_io_types": {
00:11:42.781 "read": true,
00:11:42.781 "write": true,
00:11:42.781 "unmap": false,
00:11:42.781 "flush": false,
00:11:42.781 "reset": true,
00:11:42.781 "nvme_admin": false,
00:11:42.781 "nvme_io": false,
00:11:42.781 "nvme_io_md": false,
00:11:42.781 "write_zeroes": true,
00:11:42.781 "zcopy": false,
00:11:42.781 "get_zone_info": false,
00:11:42.781 "zone_management": false,
00:11:42.781 "zone_append": false,
00:11:42.781 "compare": false,
00:11:42.781 "compare_and_write": false,
00:11:42.781 "abort": false,
00:11:42.781 "seek_hole": false,
00:11:42.781 "seek_data": false,
00:11:42.781 "copy": false,
00:11:42.781 "nvme_iov_md": false
00:11:42.781 },
00:11:42.781 "memory_domains": [
00:11:42.781 {
00:11:42.781 "dma_device_id": "system",
00:11:42.781 "dma_device_type": 1
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.781 "dma_device_type": 2
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "dma_device_id": "system",
00:11:42.781 "dma_device_type": 1
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.781 "dma_device_type": 2
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "dma_device_id": "system",
00:11:42.781 "dma_device_type": 1
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.781 "dma_device_type": 2
00:11:42.781 }
00:11:42.781 ],
00:11:42.781 "driver_specific": {
00:11:42.781 "raid": {
00:11:42.781 "uuid": "7c5d0d66-d8fa-4264-a354-9df49798948c",
00:11:42.781 "strip_size_kb": 0,
00:11:42.781 "state": "online",
00:11:42.781 "raid_level": "raid1",
00:11:42.781 "superblock": true,
00:11:42.781 "num_base_bdevs": 3,
00:11:42.781 "num_base_bdevs_discovered": 3,
00:11:42.781 "num_base_bdevs_operational": 3,
00:11:42.781 "base_bdevs_list": [
00:11:42.781 {
00:11:42.781 "name": "NewBaseBdev",
00:11:42.781 "uuid": "5267b489-73db-4fbf-95e9-867af623f535",
00:11:42.781 "is_configured": true,
00:11:42.781 "data_offset": 2048,
00:11:42.781 "data_size": 63488
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "name": "BaseBdev2",
00:11:42.781 "uuid": "7101b90b-b884-4d44-b419-3586dc40b374",
00:11:42.781 "is_configured": true,
00:11:42.781 "data_offset": 2048,
00:11:42.781 "data_size": 63488
00:11:42.781 },
00:11:42.781 {
00:11:42.781 "name": "BaseBdev3",
00:11:42.781 "uuid": "93bc6dd1-9dd6-4968-8080-5234bcfc36e1",
00:11:42.781 "is_configured": true,
00:11:42.781 "data_offset": 2048,
00:11:42.781 "data_size": 63488
00:11:42.781 }
00:11:42.781 ]
00:11:42.781 }
00:11:42.781 }
00:11:42.781 }'
00:11:42.781 16:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:42.781 BaseBdev2
00:11:42.781 BaseBdev3'
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.781 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.039 [2024-10-08 16:19:36.246160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:43.039 [2024-10-08 16:19:36.246319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:43.039 [2024-10-08 16:19:36.246436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:43.039 [2024-10-08 16:19:36.246847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:43.039 [2024-10-08 16:19:36.246866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68407
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68407 ']'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68407
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68407
00:11:43.039 killing process with pid 68407
16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68407'
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68407
00:11:43.039 [2024-10-08 16:19:36.283023] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:43.039 16:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68407
00:11:43.296 [2024-10-08 16:19:36.563211] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:44.706 16:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:44.706 
00:11:44.706 real 0m11.834s
00:11:44.706 user 0m19.284s
00:11:44.706 sys 0m1.633s
00:11:44.706 ************************************
00:11:44.706 END TEST raid_state_function_test_sb
00:11:44.706 ************************************
00:11:44.706 16:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:44.706 16:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.706 16:19:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:11:44.706 16:19:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:44.706 16:19:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:44.706 16:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:44.706 ************************************
00:11:44.706 START TEST raid_superblock_test
00:11:44.706 ************************************
00:11:44.706 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:11:44.706 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:11:44.706 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:11:44.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69033
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69033
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69033 ']'
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:44.707 16:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.707 [2024-10-08 16:19:38.014138] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:11:44.707 [2024-10-08 16:19:38.014944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69033 ]
00:11:44.964 [2024-10-08 16:19:38.203622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:45.222 [2024-10-08 16:19:38.484738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:11:45.479 [2024-10-08 16:19:38.708144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:45.479 [2024-10-08 16:19:38.708239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.737 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.994 malloc1
00:11:45.994 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.994 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:45.994 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.994 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.994 [2024-10-08 16:19:39.078801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:45.995 [2024-10-08 16:19:39.078895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:45.995 [2024-10-08 16:19:39.078929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:45.995 [2024-10-08 16:19:39.078948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:45.995 [2024-10-08 16:19:39.081952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:45.995 [2024-10-08 16:19:39.082000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:45.995 pt1
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 malloc2
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 [2024-10-08 16:19:39.148973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:45.995 [2024-10-08 16:19:39.149056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:45.995 [2024-10-08 16:19:39.149093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:45.995 [2024-10-08 16:19:39.149110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:45.995 [2024-10-08 16:19:39.152093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:45.995 [2024-10-08 16:19:39.152278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:45.995 pt2
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 malloc3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 [2024-10-08 16:19:39.208651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:45.995 [2024-10-08 16:19:39.208727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:45.995 [2024-10-08 16:19:39.208762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:45.995 [2024-10-08 16:19:39.208778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:45.995 [2024-10-08 16:19:39.211676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:45.995 [2024-10-08 16:19:39.211720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:45.995 pt3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 [2024-10-08 16:19:39.216716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:45.995 [2024-10-08 16:19:39.219233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:45.995 [2024-10-08 16:19:39.219474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:45.995 [2024-10-08 16:19:39.219727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:45.995 [2024-10-08 16:19:39.219752] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:45.995 [2024-10-08 16:19:39.220058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:45.995 [2024-10-08 16:19:39.220280] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:45.995 [2024-10-08 16:19:39.220298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:45.995 [2024-10-08 16:19:39.220486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.995 "name": "raid_bdev1",
00:11:45.995 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae",
00:11:45.995 "strip_size_kb": 0,
00:11:45.995 "state": "online",
00:11:45.995 "raid_level": "raid1",
00:11:45.995 "superblock": true,
00:11:45.995 "num_base_bdevs": 3,
00:11:45.995 "num_base_bdevs_discovered": 3,
00:11:45.995 "num_base_bdevs_operational": 3,
00:11:45.995 "base_bdevs_list": [
00:11:45.995 {
00:11:45.995 "name": "pt1",
00:11:45.995 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:45.995 "is_configured": true,
00:11:45.995 "data_offset": 2048,
00:11:45.995 "data_size": 63488
00:11:45.995 },
00:11:45.995 {
00:11:45.995 "name": "pt2",
00:11:45.995 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:45.995 "is_configured": true,
00:11:45.995 "data_offset": 2048,
00:11:45.995 "data_size": 63488
00:11:45.995 },
00:11:45.995 {
00:11:45.995 "name": "pt3",
00:11:45.995 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:45.995 "is_configured": true,
00:11:45.995 "data_offset": 2048,
00:11:45.995 "data_size": 63488
00:11:45.995 }
00:11:45.995 ]
00:11:45.995 }'
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.995 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:46.559 [2024-10-08 16:19:39.709259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:46.559 "name": "raid_bdev1",
00:11:46.559 "aliases": [
00:11:46.559 "21f971a1-9a6e-4067-a8b5-e0e2129b9aae"
00:11:46.559 ],
00:11:46.559 "product_name": "Raid Volume",
00:11:46.559 "block_size": 512,
00:11:46.559 "num_blocks": 63488,
00:11:46.559 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae",
00:11:46.559 "assigned_rate_limits": {
00:11:46.559 "rw_ios_per_sec": 0,
00:11:46.559 "rw_mbytes_per_sec": 0,
00:11:46.559 "r_mbytes_per_sec": 0,
00:11:46.559 "w_mbytes_per_sec": 0
00:11:46.559 },
00:11:46.559 "claimed": false,
00:11:46.559 "zoned": false,
00:11:46.559 "supported_io_types": {
00:11:46.559 "read": true,
00:11:46.559 "write": true,
00:11:46.559 "unmap": false,
00:11:46.559 "flush": false,
00:11:46.559 "reset": true,
00:11:46.559 "nvme_admin": false,
00:11:46.559 "nvme_io": false,
00:11:46.559 "nvme_io_md": false,
00:11:46.559 "write_zeroes": true,
00:11:46.559 "zcopy": false,
00:11:46.559 "get_zone_info": false,
00:11:46.559 "zone_management": false,
00:11:46.559 "zone_append": false,
00:11:46.559 "compare": false,
00:11:46.559 "compare_and_write": false,
00:11:46.559 "abort": false,
00:11:46.559 "seek_hole": false,
00:11:46.559 "seek_data": false,
00:11:46.559 "copy": false,
00:11:46.559 "nvme_iov_md": false
00:11:46.559 },
00:11:46.559 "memory_domains": [
00:11:46.560 {
00:11:46.560 "dma_device_id": "system",
00:11:46.560 "dma_device_type": 1
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.560 "dma_device_type": 2
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "dma_device_id": "system",
00:11:46.560 "dma_device_type": 1
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.560 "dma_device_type": 2
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "dma_device_id": "system",
00:11:46.560 "dma_device_type": 1
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.560 "dma_device_type": 2
00:11:46.560 }
00:11:46.560 ],
00:11:46.560 "driver_specific": {
00:11:46.560 "raid": {
00:11:46.560 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae",
00:11:46.560 "strip_size_kb": 0,
00:11:46.560 "state": "online",
00:11:46.560 "raid_level": "raid1",
00:11:46.560 "superblock": true,
00:11:46.560 "num_base_bdevs": 3,
00:11:46.560 "num_base_bdevs_discovered": 3,
00:11:46.560 "num_base_bdevs_operational": 3,
00:11:46.560 "base_bdevs_list": [
00:11:46.560 {
00:11:46.560 "name": "pt1",
00:11:46.560 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:46.560 "is_configured": true,
00:11:46.560 "data_offset": 2048,
00:11:46.560 "data_size": 63488
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "name": "pt2",
00:11:46.560 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:46.560 "is_configured": true,
00:11:46.560 "data_offset": 2048,
00:11:46.560 "data_size": 63488
00:11:46.560 },
00:11:46.560 {
00:11:46.560 "name": "pt3",
00:11:46.560 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:46.560 "is_configured": true,
00:11:46.560 "data_offset": 2048,
00:11:46.560 "data_size": 63488
00:11:46.560 }
00:11:46.560 ]
00:11:46.560 }
00:11:46.560 }
00:11:46.560 }'
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:46.560 pt2
00:11:46.560 pt3'
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:46.560 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.559 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:46.817 [2024-10-08 16:19:40.009253] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=21f971a1-9a6e-4067-a8b5-e0e2129b9aae
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 21f971a1-9a6e-4067-a8b5-e0e2129b9aae ']'
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 [2024-10-08 16:19:40.052880] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:46.817 [2024-10-08 16:19:40.052920] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:46.817 [2024-10-08 16:19:40.053021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:46.817 [2024-10-08 16:19:40.053132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:46.817 [2024-10-08 16:19:40.053149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:46.817 
16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.817 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.818 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.818 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.818 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.818 16:19:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.818 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.075 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.075 [2024-10-08 16:19:40.192942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:47.075 [2024-10-08 16:19:40.195601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:47.075 [2024-10-08 16:19:40.195819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:11:47.075 [2024-10-08 16:19:40.195914] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:47.075 [2024-10-08 16:19:40.195992] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:47.075 [2024-10-08 16:19:40.196028] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:47.075 [2024-10-08 16:19:40.196056] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.075 [2024-10-08 16:19:40.196071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:47.075 request: 00:11:47.075 { 00:11:47.075 "name": "raid_bdev1", 00:11:47.076 "raid_level": "raid1", 00:11:47.076 "base_bdevs": [ 00:11:47.076 "malloc1", 00:11:47.076 "malloc2", 00:11:47.076 "malloc3" 00:11:47.076 ], 00:11:47.076 "superblock": false, 00:11:47.076 "method": "bdev_raid_create", 00:11:47.076 "req_id": 1 00:11:47.076 } 00:11:47.076 Got JSON-RPC error response 00:11:47.076 response: 00:11:47.076 { 00:11:47.076 "code": -17, 00:11:47.076 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:47.076 } 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.076 16:19:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.076 [2024-10-08 16:19:40.260939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.076 [2024-10-08 16:19:40.261141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.076 [2024-10-08 16:19:40.261291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.076 [2024-10-08 16:19:40.261404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.076 [2024-10-08 16:19:40.264709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.076 [2024-10-08 16:19:40.264869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.076 [2024-10-08 16:19:40.265082] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.076 [2024-10-08 16:19:40.265264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.076 pt1 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.076 "name": "raid_bdev1", 00:11:47.076 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:47.076 "strip_size_kb": 0, 00:11:47.076 "state": "configuring", 00:11:47.076 
"raid_level": "raid1", 00:11:47.076 "superblock": true, 00:11:47.076 "num_base_bdevs": 3, 00:11:47.076 "num_base_bdevs_discovered": 1, 00:11:47.076 "num_base_bdevs_operational": 3, 00:11:47.076 "base_bdevs_list": [ 00:11:47.076 { 00:11:47.076 "name": "pt1", 00:11:47.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.076 "is_configured": true, 00:11:47.076 "data_offset": 2048, 00:11:47.076 "data_size": 63488 00:11:47.076 }, 00:11:47.076 { 00:11:47.076 "name": null, 00:11:47.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.076 "is_configured": false, 00:11:47.076 "data_offset": 2048, 00:11:47.076 "data_size": 63488 00:11:47.076 }, 00:11:47.076 { 00:11:47.076 "name": null, 00:11:47.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.076 "is_configured": false, 00:11:47.076 "data_offset": 2048, 00:11:47.076 "data_size": 63488 00:11:47.076 } 00:11:47.076 ] 00:11:47.076 }' 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.076 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 [2024-10-08 16:19:40.781344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.640 [2024-10-08 16:19:40.781438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.640 [2024-10-08 16:19:40.781478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:47.640 [2024-10-08 16:19:40.781496] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.640 [2024-10-08 16:19:40.782157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.640 [2024-10-08 16:19:40.782201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.640 [2024-10-08 16:19:40.782331] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.640 [2024-10-08 16:19:40.782367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.640 pt2 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 [2024-10-08 16:19:40.789301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.640 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.641 "name": "raid_bdev1", 00:11:47.641 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:47.641 "strip_size_kb": 0, 00:11:47.641 "state": "configuring", 00:11:47.641 "raid_level": "raid1", 00:11:47.641 "superblock": true, 00:11:47.641 "num_base_bdevs": 3, 00:11:47.641 "num_base_bdevs_discovered": 1, 00:11:47.641 "num_base_bdevs_operational": 3, 00:11:47.641 "base_bdevs_list": [ 00:11:47.641 { 00:11:47.641 "name": "pt1", 00:11:47.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.641 "is_configured": true, 00:11:47.641 "data_offset": 2048, 00:11:47.641 "data_size": 63488 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "name": null, 00:11:47.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.641 "is_configured": false, 00:11:47.641 "data_offset": 0, 00:11:47.641 "data_size": 63488 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "name": null, 00:11:47.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.641 "is_configured": false, 00:11:47.641 "data_offset": 2048, 00:11:47.641 
"data_size": 63488 00:11:47.641 } 00:11:47.641 ] 00:11:47.641 }' 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.641 16:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.207 [2024-10-08 16:19:41.309434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.207 [2024-10-08 16:19:41.309563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.207 [2024-10-08 16:19:41.309600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:48.207 [2024-10-08 16:19:41.309620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.207 [2024-10-08 16:19:41.310265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.207 [2024-10-08 16:19:41.310297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.207 [2024-10-08 16:19:41.310428] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.207 [2024-10-08 16:19:41.310491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.207 pt2 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.207 [2024-10-08 16:19:41.317400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.207 [2024-10-08 16:19:41.317625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.207 [2024-10-08 16:19:41.317789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.207 [2024-10-08 16:19:41.317921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.207 [2024-10-08 16:19:41.318532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.207 [2024-10-08 16:19:41.318695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.207 [2024-10-08 16:19:41.318928] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.207 [2024-10-08 16:19:41.318975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.207 [2024-10-08 16:19:41.319137] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.207 [2024-10-08 16:19:41.319159] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.207 [2024-10-08 16:19:41.319479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.207 [2024-10-08 16:19:41.319712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:48.207 [2024-10-08 16:19:41.319730] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:48.207 [2024-10-08 16:19:41.319909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.207 pt3 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.207 "name": "raid_bdev1", 00:11:48.207 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:48.207 "strip_size_kb": 0, 00:11:48.207 "state": "online", 00:11:48.207 "raid_level": "raid1", 00:11:48.207 "superblock": true, 00:11:48.207 "num_base_bdevs": 3, 00:11:48.207 "num_base_bdevs_discovered": 3, 00:11:48.207 "num_base_bdevs_operational": 3, 00:11:48.207 "base_bdevs_list": [ 00:11:48.207 { 00:11:48.207 "name": "pt1", 00:11:48.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.207 "is_configured": true, 00:11:48.207 "data_offset": 2048, 00:11:48.207 "data_size": 63488 00:11:48.207 }, 00:11:48.207 { 00:11:48.207 "name": "pt2", 00:11:48.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.207 "is_configured": true, 00:11:48.207 "data_offset": 2048, 00:11:48.207 "data_size": 63488 00:11:48.207 }, 00:11:48.207 { 00:11:48.207 "name": "pt3", 00:11:48.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.207 "is_configured": true, 00:11:48.207 "data_offset": 2048, 00:11:48.207 "data_size": 63488 00:11:48.207 } 00:11:48.207 ] 00:11:48.207 }' 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.207 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.774 16:19:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.774 [2024-10-08 16:19:41.801974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.774 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.774 "name": "raid_bdev1", 00:11:48.774 "aliases": [ 00:11:48.774 "21f971a1-9a6e-4067-a8b5-e0e2129b9aae" 00:11:48.774 ], 00:11:48.774 "product_name": "Raid Volume", 00:11:48.774 "block_size": 512, 00:11:48.774 "num_blocks": 63488, 00:11:48.774 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:48.774 "assigned_rate_limits": { 00:11:48.774 "rw_ios_per_sec": 0, 00:11:48.774 "rw_mbytes_per_sec": 0, 00:11:48.774 "r_mbytes_per_sec": 0, 00:11:48.774 "w_mbytes_per_sec": 0 00:11:48.774 }, 00:11:48.774 "claimed": false, 00:11:48.774 "zoned": false, 00:11:48.774 "supported_io_types": { 00:11:48.774 "read": true, 00:11:48.774 "write": true, 00:11:48.774 "unmap": false, 00:11:48.774 "flush": false, 00:11:48.774 "reset": true, 00:11:48.774 "nvme_admin": false, 00:11:48.774 "nvme_io": false, 00:11:48.774 "nvme_io_md": false, 00:11:48.774 "write_zeroes": true, 00:11:48.774 "zcopy": false, 00:11:48.774 "get_zone_info": false, 00:11:48.774 
"zone_management": false, 00:11:48.774 "zone_append": false, 00:11:48.774 "compare": false, 00:11:48.774 "compare_and_write": false, 00:11:48.774 "abort": false, 00:11:48.774 "seek_hole": false, 00:11:48.774 "seek_data": false, 00:11:48.774 "copy": false, 00:11:48.774 "nvme_iov_md": false 00:11:48.774 }, 00:11:48.774 "memory_domains": [ 00:11:48.774 { 00:11:48.774 "dma_device_id": "system", 00:11:48.774 "dma_device_type": 1 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.774 "dma_device_type": 2 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "dma_device_id": "system", 00:11:48.774 "dma_device_type": 1 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.774 "dma_device_type": 2 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "dma_device_id": "system", 00:11:48.774 "dma_device_type": 1 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.774 "dma_device_type": 2 00:11:48.774 } 00:11:48.774 ], 00:11:48.774 "driver_specific": { 00:11:48.774 "raid": { 00:11:48.774 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:48.774 "strip_size_kb": 0, 00:11:48.774 "state": "online", 00:11:48.774 "raid_level": "raid1", 00:11:48.774 "superblock": true, 00:11:48.774 "num_base_bdevs": 3, 00:11:48.774 "num_base_bdevs_discovered": 3, 00:11:48.774 "num_base_bdevs_operational": 3, 00:11:48.774 "base_bdevs_list": [ 00:11:48.774 { 00:11:48.774 "name": "pt1", 00:11:48.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.774 "is_configured": true, 00:11:48.774 "data_offset": 2048, 00:11:48.774 "data_size": 63488 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "name": "pt2", 00:11:48.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.774 "is_configured": true, 00:11:48.774 "data_offset": 2048, 00:11:48.774 "data_size": 63488 00:11:48.774 }, 00:11:48.774 { 00:11:48.774 "name": "pt3", 00:11:48.774 "uuid": "00000000-0000-0000-0000-000000000003", 
00:11:48.774 "is_configured": true, 00:11:48.774 "data_offset": 2048, 00:11:48.774 "data_size": 63488 00:11:48.774 } 00:11:48.774 ] 00:11:48.774 } 00:11:48.774 } 00:11:48.774 }' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.775 pt2 00:11:48.775 pt3' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.775 16:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.775 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.775 [2024-10-08 16:19:42.081946] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 21f971a1-9a6e-4067-a8b5-e0e2129b9aae '!=' 21f971a1-9a6e-4067-a8b5-e0e2129b9aae ']' 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.032 [2024-10-08 16:19:42.125701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.032 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.032 "name": "raid_bdev1", 00:11:49.032 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:49.032 "strip_size_kb": 0, 00:11:49.032 "state": "online", 00:11:49.032 "raid_level": "raid1", 00:11:49.032 "superblock": true, 00:11:49.032 "num_base_bdevs": 3, 00:11:49.032 "num_base_bdevs_discovered": 2, 00:11:49.032 "num_base_bdevs_operational": 2, 00:11:49.032 "base_bdevs_list": [ 00:11:49.032 { 00:11:49.032 "name": null, 00:11:49.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.032 "is_configured": false, 00:11:49.032 "data_offset": 0, 00:11:49.032 "data_size": 63488 00:11:49.032 }, 00:11:49.032 { 00:11:49.032 "name": "pt2", 00:11:49.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.033 "is_configured": true, 00:11:49.033 "data_offset": 2048, 00:11:49.033 "data_size": 63488 00:11:49.033 }, 00:11:49.033 { 00:11:49.033 "name": "pt3", 00:11:49.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.033 "is_configured": true, 00:11:49.033 "data_offset": 2048, 00:11:49.033 "data_size": 63488 00:11:49.033 } 00:11:49.033 ] 00:11:49.033 }' 00:11:49.033 16:19:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.033 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.290 [2024-10-08 16:19:42.569756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.290 [2024-10-08 16:19:42.569798] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.290 [2024-10-08 16:19:42.569917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.290 [2024-10-08 16:19:42.570007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.290 [2024-10-08 16:19:42.570032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:49.290 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:49.548 
16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.548 [2024-10-08 16:19:42.645701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.548 [2024-10-08 16:19:42.645777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.548 [2024-10-08 16:19:42.645805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:49.548 [2024-10-08 16:19:42.645824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.548 [2024-10-08 16:19:42.648951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.548 [2024-10-08 16:19:42.649003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.548 [2024-10-08 16:19:42.649109] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.548 [2024-10-08 16:19:42.649178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.548 pt2 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.548 "name": "raid_bdev1", 00:11:49.548 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:49.548 "strip_size_kb": 0, 00:11:49.548 "state": "configuring", 00:11:49.548 "raid_level": "raid1", 00:11:49.548 "superblock": true, 00:11:49.548 "num_base_bdevs": 3, 00:11:49.548 "num_base_bdevs_discovered": 1, 00:11:49.548 "num_base_bdevs_operational": 2, 00:11:49.548 "base_bdevs_list": [ 00:11:49.548 { 00:11:49.548 "name": null, 00:11:49.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.548 "is_configured": false, 00:11:49.548 "data_offset": 2048, 00:11:49.548 "data_size": 63488 00:11:49.548 }, 00:11:49.548 { 00:11:49.548 "name": "pt2", 00:11:49.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.548 "is_configured": true, 00:11:49.548 "data_offset": 2048, 00:11:49.548 "data_size": 63488 00:11:49.548 }, 00:11:49.548 { 00:11:49.548 "name": null, 00:11:49.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.548 "is_configured": false, 00:11:49.548 "data_offset": 2048, 00:11:49.548 "data_size": 63488 00:11:49.548 } 00:11:49.548 ] 00:11:49.548 }' 
00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.548 16:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 [2024-10-08 16:19:43.169930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.114 [2024-10-08 16:19:43.170032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.114 [2024-10-08 16:19:43.170069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:50.114 [2024-10-08 16:19:43.170089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.114 [2024-10-08 16:19:43.170784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.114 [2024-10-08 16:19:43.170825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.114 [2024-10-08 16:19:43.170947] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:50.114 [2024-10-08 16:19:43.170995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.114 [2024-10-08 16:19:43.171170] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:50.114 [2024-10-08 16:19:43.171200] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.114 [2024-10-08 16:19:43.171565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:50.114 [2024-10-08 16:19:43.171770] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:50.114 [2024-10-08 16:19:43.171786] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:50.114 [2024-10-08 16:19:43.171966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.114 pt3 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.114 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.115 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.115 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.115 "name": "raid_bdev1", 00:11:50.115 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:50.115 "strip_size_kb": 0, 00:11:50.115 "state": "online", 00:11:50.115 "raid_level": "raid1", 00:11:50.115 "superblock": true, 00:11:50.115 "num_base_bdevs": 3, 00:11:50.115 "num_base_bdevs_discovered": 2, 00:11:50.115 "num_base_bdevs_operational": 2, 00:11:50.115 "base_bdevs_list": [ 00:11:50.115 { 00:11:50.115 "name": null, 00:11:50.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.115 "is_configured": false, 00:11:50.115 "data_offset": 2048, 00:11:50.115 "data_size": 63488 00:11:50.115 }, 00:11:50.115 { 00:11:50.115 "name": "pt2", 00:11:50.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.115 "is_configured": true, 00:11:50.115 "data_offset": 2048, 00:11:50.115 "data_size": 63488 00:11:50.115 }, 00:11:50.115 { 00:11:50.115 "name": "pt3", 00:11:50.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.115 "is_configured": true, 00:11:50.115 "data_offset": 2048, 00:11:50.115 "data_size": 63488 00:11:50.115 } 00:11:50.115 ] 00:11:50.115 }' 00:11:50.115 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.115 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.392 
16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.392 [2024-10-08 16:19:43.662006] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.392 [2024-10-08 16:19:43.662195] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.392 [2024-10-08 16:19:43.662339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.392 [2024-10-08 16:19:43.662438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.392 [2024-10-08 16:19:43.662455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.392 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.649 16:19:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.649 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 [2024-10-08 16:19:43.742077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.649 [2024-10-08 16:19:43.742178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.649 [2024-10-08 16:19:43.742220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:50.649 [2024-10-08 16:19:43.742238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.649 [2024-10-08 16:19:43.745559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.649 [2024-10-08 16:19:43.745605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.649 [2024-10-08 16:19:43.745743] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:50.649 [2024-10-08 16:19:43.745817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.649 [2024-10-08 16:19:43.746000] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:50.649 [2024-10-08 16:19:43.746032] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.650 [2024-10-08 16:19:43.746059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:50.650 [2024-10-08 
16:19:43.746132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.650 pt1 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.650 16:19:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.650 "name": "raid_bdev1", 00:11:50.650 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:50.650 "strip_size_kb": 0, 00:11:50.650 "state": "configuring", 00:11:50.650 "raid_level": "raid1", 00:11:50.650 "superblock": true, 00:11:50.650 "num_base_bdevs": 3, 00:11:50.650 "num_base_bdevs_discovered": 1, 00:11:50.650 "num_base_bdevs_operational": 2, 00:11:50.650 "base_bdevs_list": [ 00:11:50.650 { 00:11:50.650 "name": null, 00:11:50.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.650 "is_configured": false, 00:11:50.650 "data_offset": 2048, 00:11:50.650 "data_size": 63488 00:11:50.650 }, 00:11:50.650 { 00:11:50.650 "name": "pt2", 00:11:50.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.650 "is_configured": true, 00:11:50.650 "data_offset": 2048, 00:11:50.650 "data_size": 63488 00:11:50.650 }, 00:11:50.650 { 00:11:50.650 "name": null, 00:11:50.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.650 "is_configured": false, 00:11:50.650 "data_offset": 2048, 00:11:50.650 "data_size": 63488 00:11:50.650 } 00:11:50.650 ] 00:11:50.650 }' 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.650 16:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.907 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:50.907 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.907 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.907 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.164 [2024-10-08 16:19:44.266312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:51.164 [2024-10-08 16:19:44.266420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.164 [2024-10-08 16:19:44.266456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:51.164 [2024-10-08 16:19:44.266473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.164 [2024-10-08 16:19:44.267068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.164 [2024-10-08 16:19:44.267104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:51.164 [2024-10-08 16:19:44.267218] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:51.164 [2024-10-08 16:19:44.267290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.164 [2024-10-08 16:19:44.267465] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:51.164 [2024-10-08 16:19:44.267488] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.164 [2024-10-08 16:19:44.267845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:51.164 [2024-10-08 16:19:44.268063] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:51.164 [2024-10-08 16:19:44.268094] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:11:51.164 [2024-10-08 16:19:44.268268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.164 pt3 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.164 "name": "raid_bdev1", 00:11:51.164 "uuid": "21f971a1-9a6e-4067-a8b5-e0e2129b9aae", 00:11:51.164 "strip_size_kb": 0, 00:11:51.164 "state": "online", 00:11:51.164 "raid_level": "raid1", 00:11:51.164 "superblock": true, 00:11:51.164 "num_base_bdevs": 3, 00:11:51.164 "num_base_bdevs_discovered": 2, 00:11:51.164 "num_base_bdevs_operational": 2, 00:11:51.164 "base_bdevs_list": [ 00:11:51.164 { 00:11:51.164 "name": null, 00:11:51.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.164 "is_configured": false, 00:11:51.164 "data_offset": 2048, 00:11:51.164 "data_size": 63488 00:11:51.164 }, 00:11:51.164 { 00:11:51.164 "name": "pt2", 00:11:51.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.164 "is_configured": true, 00:11:51.164 "data_offset": 2048, 00:11:51.164 "data_size": 63488 00:11:51.164 }, 00:11:51.164 { 00:11:51.164 "name": "pt3", 00:11:51.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.164 "is_configured": true, 00:11:51.164 "data_offset": 2048, 00:11:51.164 "data_size": 63488 00:11:51.164 } 00:11:51.164 ] 00:11:51.164 }' 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.164 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:51.733 
16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 [2024-10-08 16:19:44.818866] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 21f971a1-9a6e-4067-a8b5-e0e2129b9aae '!=' 21f971a1-9a6e-4067-a8b5-e0e2129b9aae ']' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69033 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69033 ']' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69033 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69033 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.733 killing process with pid 69033 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69033' 00:11:51.733 16:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 69033 00:11:51.733 16:19:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69033 00:11:51.733 [2024-10-08 16:19:44.898805] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.733 [2024-10-08 16:19:44.898943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.733 [2024-10-08 16:19:44.899032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.733 [2024-10-08 16:19:44.899070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:51.991 [2024-10-08 16:19:45.176917] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.366 16:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:53.366 00:11:53.366 real 0m8.514s 00:11:53.366 user 0m13.624s 00:11:53.366 sys 0m1.224s 00:11:53.366 16:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.366 16:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 ************************************ 00:11:53.366 END TEST raid_superblock_test 00:11:53.366 ************************************ 00:11:53.366 16:19:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:53.366 16:19:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:53.366 16:19:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.366 16:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 ************************************ 00:11:53.366 START TEST raid_read_error_test 00:11:53.366 ************************************ 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:53.366 16:19:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RjTsqsVZ1o 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69492 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69492 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69492 ']' 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.366 16:19:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 [2024-10-08 16:19:46.591171] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:11:53.366 [2024-10-08 16:19:46.591345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69492 ] 00:11:53.624 [2024-10-08 16:19:46.763077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.882 [2024-10-08 16:19:47.008700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.139 [2024-10-08 16:19:47.218629] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.139 [2024-10-08 16:19:47.218721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.397 BaseBdev1_malloc 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.397 true 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.397 [2024-10-08 16:19:47.684716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:54.397 [2024-10-08 16:19:47.684800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.397 [2024-10-08 16:19:47.684827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:54.397 [2024-10-08 16:19:47.684846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.397 [2024-10-08 16:19:47.687821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.397 [2024-10-08 16:19:47.687868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:54.397 BaseBdev1 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.397 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 BaseBdev2_malloc 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 true 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 [2024-10-08 16:19:47.750170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:54.656 [2024-10-08 16:19:47.750263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.656 [2024-10-08 16:19:47.750291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:54.656 [2024-10-08 16:19:47.750309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.656 [2024-10-08 16:19:47.753271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.656 [2024-10-08 16:19:47.753327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.656 BaseBdev2 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 BaseBdev3_malloc 00:11:54.656 16:19:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 true 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 [2024-10-08 16:19:47.807089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:54.656 [2024-10-08 16:19:47.807184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.656 [2024-10-08 16:19:47.807212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:54.656 [2024-10-08 16:19:47.807230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.656 [2024-10-08 16:19:47.810212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.656 [2024-10-08 16:19:47.810255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:54.656 BaseBdev3 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 [2024-10-08 16:19:47.815230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.656 [2024-10-08 16:19:47.817767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.656 [2024-10-08 16:19:47.817885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.656 [2024-10-08 16:19:47.818154] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:54.656 [2024-10-08 16:19:47.818181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.656 [2024-10-08 16:19:47.818510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:54.656 [2024-10-08 16:19:47.818753] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:54.656 [2024-10-08 16:19:47.818798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:54.656 [2024-10-08 16:19:47.818991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.656 16:19:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.656 "name": "raid_bdev1", 00:11:54.656 "uuid": "d94d33bf-3683-4f41-bb78-f3e33e1e36ab", 00:11:54.656 "strip_size_kb": 0, 00:11:54.656 "state": "online", 00:11:54.656 "raid_level": "raid1", 00:11:54.656 "superblock": true, 00:11:54.656 "num_base_bdevs": 3, 00:11:54.656 "num_base_bdevs_discovered": 3, 00:11:54.656 "num_base_bdevs_operational": 3, 00:11:54.656 "base_bdevs_list": [ 00:11:54.656 { 00:11:54.656 "name": "BaseBdev1", 00:11:54.656 "uuid": "3e9633b5-e2b7-51a0-b164-d39e4cbc4818", 00:11:54.656 "is_configured": true, 00:11:54.656 "data_offset": 2048, 00:11:54.656 "data_size": 63488 00:11:54.656 }, 00:11:54.656 { 00:11:54.656 "name": "BaseBdev2", 00:11:54.656 "uuid": "e4a1363f-3fb0-5e92-8cbd-f972b9538ecd", 00:11:54.656 "is_configured": true, 00:11:54.656 "data_offset": 2048, 00:11:54.656 "data_size": 63488 
00:11:54.656 }, 00:11:54.656 { 00:11:54.656 "name": "BaseBdev3", 00:11:54.656 "uuid": "3be4872e-718d-58da-9971-8f57aca2eea1", 00:11:54.656 "is_configured": true, 00:11:54.656 "data_offset": 2048, 00:11:54.656 "data_size": 63488 00:11:54.656 } 00:11:54.656 ] 00:11:54.656 }' 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.656 16:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.225 16:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:55.225 16:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.225 [2024-10-08 16:19:48.464777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.186 
16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.186 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.186 "name": "raid_bdev1", 00:11:56.186 "uuid": "d94d33bf-3683-4f41-bb78-f3e33e1e36ab", 00:11:56.186 "strip_size_kb": 0, 00:11:56.186 "state": "online", 00:11:56.186 "raid_level": "raid1", 00:11:56.186 "superblock": true, 00:11:56.186 "num_base_bdevs": 3, 00:11:56.186 "num_base_bdevs_discovered": 3, 00:11:56.186 "num_base_bdevs_operational": 3, 00:11:56.186 "base_bdevs_list": [ 00:11:56.186 { 00:11:56.186 "name": "BaseBdev1", 00:11:56.186 "uuid": "3e9633b5-e2b7-51a0-b164-d39e4cbc4818", 
00:11:56.186 "is_configured": true, 00:11:56.186 "data_offset": 2048, 00:11:56.186 "data_size": 63488 00:11:56.186 }, 00:11:56.186 { 00:11:56.186 "name": "BaseBdev2", 00:11:56.186 "uuid": "e4a1363f-3fb0-5e92-8cbd-f972b9538ecd", 00:11:56.186 "is_configured": true, 00:11:56.186 "data_offset": 2048, 00:11:56.186 "data_size": 63488 00:11:56.186 }, 00:11:56.186 { 00:11:56.186 "name": "BaseBdev3", 00:11:56.186 "uuid": "3be4872e-718d-58da-9971-8f57aca2eea1", 00:11:56.187 "is_configured": true, 00:11:56.187 "data_offset": 2048, 00:11:56.187 "data_size": 63488 00:11:56.187 } 00:11:56.187 ] 00:11:56.187 }' 00:11:56.187 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.187 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.751 [2024-10-08 16:19:49.897761] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:56.751 [2024-10-08 16:19:49.897822] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.751 [2024-10-08 16:19:49.901222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.751 [2024-10-08 16:19:49.901286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.751 [2024-10-08 16:19:49.901426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.751 [2024-10-08 16:19:49.901462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:56.751 { 00:11:56.751 "results": [ 00:11:56.751 { 00:11:56.751 "job": "raid_bdev1", 
00:11:56.751 "core_mask": "0x1", 00:11:56.751 "workload": "randrw", 00:11:56.751 "percentage": 50, 00:11:56.751 "status": "finished", 00:11:56.751 "queue_depth": 1, 00:11:56.751 "io_size": 131072, 00:11:56.751 "runtime": 1.430745, 00:11:56.751 "iops": 8960.366801910892, 00:11:56.751 "mibps": 1120.0458502388615, 00:11:56.751 "io_failed": 0, 00:11:56.751 "io_timeout": 0, 00:11:56.751 "avg_latency_us": 107.3565372287619, 00:11:56.751 "min_latency_us": 40.49454545454545, 00:11:56.751 "max_latency_us": 1921.3963636363637 00:11:56.751 } 00:11:56.751 ], 00:11:56.751 "core_count": 1 00:11:56.751 } 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69492 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69492 ']' 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69492 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:56.751 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.752 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69492 00:11:56.752 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.752 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.752 killing process with pid 69492 00:11:56.752 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69492' 00:11:56.752 16:19:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69492 00:11:56.752 [2024-10-08 16:19:49.937952] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.752 16:19:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69492 00:11:57.009 [2024-10-08 16:19:50.160179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RjTsqsVZ1o 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:58.380 00:11:58.380 real 0m4.944s 00:11:58.380 user 0m6.079s 00:11:58.380 sys 0m0.637s 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.380 16:19:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.380 ************************************ 00:11:58.380 END TEST raid_read_error_test 00:11:58.380 ************************************ 00:11:58.380 16:19:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:58.380 16:19:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:58.380 16:19:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.380 16:19:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.380 ************************************ 00:11:58.380 START TEST raid_write_error_test 00:11:58.380 ************************************ 00:11:58.380 16:19:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IF3MwsMseo 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69642 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69642 00:11:58.380 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69642 ']' 00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.381 16:19:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.381 [2024-10-08 16:19:51.595200] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:58.381 [2024-10-08 16:19:51.595361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69642 ] 00:11:58.638 [2024-10-08 16:19:51.760433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.895 [2024-10-08 16:19:52.001361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.895 [2024-10-08 16:19:52.204828] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.895 [2024-10-08 16:19:52.204920] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.460 BaseBdev1_malloc 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.460 true 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.460 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.460 [2024-10-08 16:19:52.722032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:59.460 [2024-10-08 16:19:52.722123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.460 [2024-10-08 16:19:52.722153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:59.460 [2024-10-08 16:19:52.722172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.460 [2024-10-08 16:19:52.725045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.461 [2024-10-08 16:19:52.725097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.461 BaseBdev1 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.461 BaseBdev2_malloc 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.461 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.722 true 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.722 [2024-10-08 16:19:52.797059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:59.722 [2024-10-08 16:19:52.797179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.722 [2024-10-08 16:19:52.797212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:59.722 [2024-10-08 16:19:52.797231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.722 [2024-10-08 16:19:52.800218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.722 [2024-10-08 16:19:52.800274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:59.722 BaseBdev2 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.722 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.722 16:19:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.723 BaseBdev3_malloc 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.723 true 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.723 [2024-10-08 16:19:52.865708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:59.723 [2024-10-08 16:19:52.865811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.723 [2024-10-08 16:19:52.865844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:59.723 [2024-10-08 16:19:52.865863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.723 [2024-10-08 16:19:52.868865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.723 [2024-10-08 16:19:52.868920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:59.723 BaseBdev3 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.723 [2024-10-08 16:19:52.873861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.723 [2024-10-08 16:19:52.876266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.723 [2024-10-08 16:19:52.876376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.723 [2024-10-08 16:19:52.876697] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.723 [2024-10-08 16:19:52.876717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.723 [2024-10-08 16:19:52.877071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:59.723 [2024-10-08 16:19:52.877304] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.723 [2024-10-08 16:19:52.877327] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:59.723 [2024-10-08 16:19:52.877552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.723 "name": "raid_bdev1", 00:11:59.723 "uuid": "fed8622b-b0c9-4e96-a9f4-6a263f110c50", 00:11:59.723 "strip_size_kb": 0, 00:11:59.723 "state": "online", 00:11:59.723 "raid_level": "raid1", 00:11:59.723 "superblock": true, 00:11:59.723 "num_base_bdevs": 3, 00:11:59.723 "num_base_bdevs_discovered": 3, 00:11:59.723 "num_base_bdevs_operational": 3, 00:11:59.723 "base_bdevs_list": [ 00:11:59.723 { 00:11:59.723 "name": "BaseBdev1", 00:11:59.723 
"uuid": "3217f17d-c286-5934-bcf4-17e2200d7396", 00:11:59.723 "is_configured": true, 00:11:59.723 "data_offset": 2048, 00:11:59.723 "data_size": 63488 00:11:59.723 }, 00:11:59.723 { 00:11:59.723 "name": "BaseBdev2", 00:11:59.723 "uuid": "0b382827-23f3-5b8a-9aba-f1865ab5533d", 00:11:59.723 "is_configured": true, 00:11:59.723 "data_offset": 2048, 00:11:59.723 "data_size": 63488 00:11:59.723 }, 00:11:59.723 { 00:11:59.723 "name": "BaseBdev3", 00:11:59.723 "uuid": "e2e0564b-92e0-5de2-b838-9b4d0147c911", 00:11:59.723 "is_configured": true, 00:11:59.723 "data_offset": 2048, 00:11:59.723 "data_size": 63488 00:11:59.723 } 00:11:59.723 ] 00:11:59.723 }' 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.723 16:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.289 16:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:00.289 16:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:00.289 [2024-10-08 16:19:53.491411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 [2024-10-08 16:19:54.394748] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:01.223 [2024-10-08 16:19:54.394817] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.223 [2024-10-08 16:19:54.395095] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.223 "name": "raid_bdev1", 00:12:01.223 "uuid": "fed8622b-b0c9-4e96-a9f4-6a263f110c50", 00:12:01.223 "strip_size_kb": 0, 00:12:01.223 "state": "online", 00:12:01.223 "raid_level": "raid1", 00:12:01.223 "superblock": true, 00:12:01.223 "num_base_bdevs": 3, 00:12:01.223 "num_base_bdevs_discovered": 2, 00:12:01.223 "num_base_bdevs_operational": 2, 00:12:01.223 "base_bdevs_list": [ 00:12:01.223 { 00:12:01.223 "name": null, 00:12:01.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.223 "is_configured": false, 00:12:01.223 "data_offset": 0, 00:12:01.223 "data_size": 63488 00:12:01.223 }, 00:12:01.223 { 00:12:01.223 "name": "BaseBdev2", 00:12:01.223 "uuid": "0b382827-23f3-5b8a-9aba-f1865ab5533d", 00:12:01.223 "is_configured": true, 00:12:01.223 "data_offset": 2048, 00:12:01.223 "data_size": 63488 00:12:01.223 }, 00:12:01.223 { 00:12:01.223 "name": "BaseBdev3", 00:12:01.223 "uuid": "e2e0564b-92e0-5de2-b838-9b4d0147c911", 00:12:01.223 "is_configured": true, 00:12:01.223 "data_offset": 2048, 00:12:01.223 "data_size": 63488 00:12:01.223 } 00:12:01.223 ] 00:12:01.223 }' 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.223 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.794 [2024-10-08 16:19:54.934792] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.794 [2024-10-08 16:19:54.934889] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.794 [2024-10-08 16:19:54.941568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.794 [2024-10-08 16:19:54.941743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.794 [2024-10-08 16:19:54.941960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.794 [2024-10-08 16:19:54.941995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:01.794 { 00:12:01.794 "results": [ 00:12:01.794 { 00:12:01.794 "job": "raid_bdev1", 00:12:01.794 "core_mask": "0x1", 00:12:01.794 "workload": "randrw", 00:12:01.794 "percentage": 50, 00:12:01.794 "status": "finished", 00:12:01.794 "queue_depth": 1, 00:12:01.794 "io_size": 131072, 00:12:01.794 "runtime": 1.441059, 00:12:01.794 "iops": 8994.773982189487, 00:12:01.794 "mibps": 1124.3467477736858, 00:12:01.794 "io_failed": 0, 00:12:01.794 "io_timeout": 0, 00:12:01.794 "avg_latency_us": 106.40813566929907, 00:12:01.794 "min_latency_us": 44.45090909090909, 00:12:01.794 "max_latency_us": 1951.1854545454546 00:12:01.794 } 00:12:01.794 ], 00:12:01.794 "core_count": 1 00:12:01.794 } 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69642 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69642 ']' 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69642 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:01.794 16:19:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69642 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.794 killing process with pid 69642 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69642' 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69642 00:12:01.794 16:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69642 00:12:01.794 [2024-10-08 16:19:54.983223] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.052 [2024-10-08 16:19:55.204514] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IF3MwsMseo 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:03.425 00:12:03.425 real 0m4.976s 00:12:03.425 user 0m6.141s 00:12:03.425 sys 0m0.607s 00:12:03.425 
************************************ 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.425 16:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.425 END TEST raid_write_error_test 00:12:03.425 ************************************ 00:12:03.425 16:19:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:03.425 16:19:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:03.425 16:19:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:03.425 16:19:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.425 16:19:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.425 16:19:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.425 ************************************ 00:12:03.425 START TEST raid_state_function_test 00:12:03.425 ************************************ 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69787 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:03.425 Process raid pid: 69787 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69787' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69787 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69787 ']' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.425 16:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.425 [2024-10-08 16:19:56.622254] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:03.425 [2024-10-08 16:19:56.622470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.683 [2024-10-08 16:19:56.792876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.941 [2024-10-08 16:19:57.031391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.941 [2024-10-08 16:19:57.238408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.941 [2024-10-08 16:19:57.238475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 [2024-10-08 16:19:57.576637] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.507 [2024-10-08 16:19:57.576722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.507 [2024-10-08 16:19:57.576738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.507 [2024-10-08 16:19:57.576757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.507 [2024-10-08 16:19:57.576768] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:04.507 [2024-10-08 16:19:57.576783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.507 [2024-10-08 16:19:57.576793] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:04.507 [2024-10-08 16:19:57.576808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.507 "name": "Existed_Raid", 00:12:04.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.507 "strip_size_kb": 64, 00:12:04.507 "state": "configuring", 00:12:04.507 "raid_level": "raid0", 00:12:04.507 "superblock": false, 00:12:04.507 "num_base_bdevs": 4, 00:12:04.507 "num_base_bdevs_discovered": 0, 00:12:04.507 "num_base_bdevs_operational": 4, 00:12:04.507 "base_bdevs_list": [ 00:12:04.507 { 00:12:04.507 "name": "BaseBdev1", 00:12:04.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.507 "is_configured": false, 00:12:04.507 "data_offset": 0, 00:12:04.507 "data_size": 0 00:12:04.507 }, 00:12:04.507 { 00:12:04.507 "name": "BaseBdev2", 00:12:04.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.507 "is_configured": false, 00:12:04.507 "data_offset": 0, 00:12:04.507 "data_size": 0 00:12:04.507 }, 00:12:04.507 { 00:12:04.507 "name": "BaseBdev3", 00:12:04.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.507 "is_configured": false, 00:12:04.507 "data_offset": 0, 00:12:04.507 "data_size": 0 00:12:04.507 }, 00:12:04.507 { 00:12:04.507 "name": "BaseBdev4", 00:12:04.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.507 "is_configured": false, 00:12:04.507 "data_offset": 0, 00:12:04.507 "data_size": 0 00:12:04.507 } 00:12:04.507 ] 00:12:04.507 }' 00:12:04.507 16:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.508 16:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:04.766 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 [2024-10-08 16:19:58.084696] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.766 [2024-10-08 16:19:58.084768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 [2024-10-08 16:19:58.092674] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.024 [2024-10-08 16:19:58.092729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.024 [2024-10-08 16:19:58.092743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.024 [2024-10-08 16:19:58.092759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.024 [2024-10-08 16:19:58.092770] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.024 [2024-10-08 16:19:58.092784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.024 [2024-10-08 16:19:58.092794] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.024 [2024-10-08 16:19:58.092808] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 [2024-10-08 16:19:58.146807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.024 BaseBdev1 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 [ 00:12:05.024 { 00:12:05.024 "name": "BaseBdev1", 00:12:05.024 "aliases": [ 00:12:05.024 "c8efa064-760b-481b-92d8-82aab55d4640" 00:12:05.024 ], 00:12:05.024 "product_name": "Malloc disk", 00:12:05.024 "block_size": 512, 00:12:05.024 "num_blocks": 65536, 00:12:05.024 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:05.024 "assigned_rate_limits": { 00:12:05.024 "rw_ios_per_sec": 0, 00:12:05.024 "rw_mbytes_per_sec": 0, 00:12:05.024 "r_mbytes_per_sec": 0, 00:12:05.024 "w_mbytes_per_sec": 0 00:12:05.024 }, 00:12:05.024 "claimed": true, 00:12:05.024 "claim_type": "exclusive_write", 00:12:05.024 "zoned": false, 00:12:05.024 "supported_io_types": { 00:12:05.024 "read": true, 00:12:05.024 "write": true, 00:12:05.024 "unmap": true, 00:12:05.024 "flush": true, 00:12:05.024 "reset": true, 00:12:05.024 "nvme_admin": false, 00:12:05.024 "nvme_io": false, 00:12:05.024 "nvme_io_md": false, 00:12:05.024 "write_zeroes": true, 00:12:05.024 "zcopy": true, 00:12:05.024 "get_zone_info": false, 00:12:05.024 "zone_management": false, 00:12:05.024 "zone_append": false, 00:12:05.024 "compare": false, 00:12:05.024 "compare_and_write": false, 00:12:05.024 "abort": true, 00:12:05.024 "seek_hole": false, 00:12:05.024 "seek_data": false, 00:12:05.024 "copy": true, 00:12:05.024 "nvme_iov_md": false 00:12:05.024 }, 00:12:05.024 "memory_domains": [ 00:12:05.024 { 00:12:05.024 "dma_device_id": "system", 00:12:05.024 "dma_device_type": 1 00:12:05.024 }, 00:12:05.024 { 00:12:05.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.024 "dma_device_type": 2 00:12:05.024 } 00:12:05.024 ], 00:12:05.024 "driver_specific": {} 00:12:05.024 } 00:12:05.024 ] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.024 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.024 "name": "Existed_Raid", 
00:12:05.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.024 "strip_size_kb": 64, 00:12:05.024 "state": "configuring", 00:12:05.024 "raid_level": "raid0", 00:12:05.024 "superblock": false, 00:12:05.024 "num_base_bdevs": 4, 00:12:05.024 "num_base_bdevs_discovered": 1, 00:12:05.024 "num_base_bdevs_operational": 4, 00:12:05.024 "base_bdevs_list": [ 00:12:05.024 { 00:12:05.024 "name": "BaseBdev1", 00:12:05.024 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:05.024 "is_configured": true, 00:12:05.024 "data_offset": 0, 00:12:05.024 "data_size": 65536 00:12:05.024 }, 00:12:05.024 { 00:12:05.024 "name": "BaseBdev2", 00:12:05.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.024 "is_configured": false, 00:12:05.024 "data_offset": 0, 00:12:05.024 "data_size": 0 00:12:05.024 }, 00:12:05.024 { 00:12:05.024 "name": "BaseBdev3", 00:12:05.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.024 "is_configured": false, 00:12:05.024 "data_offset": 0, 00:12:05.024 "data_size": 0 00:12:05.024 }, 00:12:05.024 { 00:12:05.025 "name": "BaseBdev4", 00:12:05.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.025 "is_configured": false, 00:12:05.025 "data_offset": 0, 00:12:05.025 "data_size": 0 00:12:05.025 } 00:12:05.025 ] 00:12:05.025 }' 00:12:05.025 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.025 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.594 [2024-10-08 16:19:58.671012] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.594 [2024-10-08 16:19:58.671098] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.594 [2024-10-08 16:19:58.679017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.594 [2024-10-08 16:19:58.681449] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.594 [2024-10-08 16:19:58.681511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.594 [2024-10-08 16:19:58.681544] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.594 [2024-10-08 16:19:58.681565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.594 [2024-10-08 16:19:58.681576] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.594 [2024-10-08 16:19:58.681590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.594 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.595 "name": "Existed_Raid", 00:12:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.595 "strip_size_kb": 64, 00:12:05.595 "state": "configuring", 00:12:05.595 "raid_level": "raid0", 00:12:05.595 "superblock": false, 00:12:05.595 "num_base_bdevs": 4, 00:12:05.595 
"num_base_bdevs_discovered": 1, 00:12:05.595 "num_base_bdevs_operational": 4, 00:12:05.595 "base_bdevs_list": [ 00:12:05.595 { 00:12:05.595 "name": "BaseBdev1", 00:12:05.595 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:05.595 "is_configured": true, 00:12:05.595 "data_offset": 0, 00:12:05.595 "data_size": 65536 00:12:05.595 }, 00:12:05.595 { 00:12:05.595 "name": "BaseBdev2", 00:12:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.595 "is_configured": false, 00:12:05.595 "data_offset": 0, 00:12:05.595 "data_size": 0 00:12:05.595 }, 00:12:05.595 { 00:12:05.595 "name": "BaseBdev3", 00:12:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.595 "is_configured": false, 00:12:05.595 "data_offset": 0, 00:12:05.595 "data_size": 0 00:12:05.595 }, 00:12:05.595 { 00:12:05.595 "name": "BaseBdev4", 00:12:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.595 "is_configured": false, 00:12:05.595 "data_offset": 0, 00:12:05.595 "data_size": 0 00:12:05.595 } 00:12:05.595 ] 00:12:05.595 }' 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.595 16:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 [2024-10-08 16:19:59.245596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.160 BaseBdev2 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:06.160 16:19:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.160 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 [ 00:12:06.160 { 00:12:06.160 "name": "BaseBdev2", 00:12:06.160 "aliases": [ 00:12:06.161 "b661a7a6-5e36-4a49-9aa6-40855d0a596a" 00:12:06.161 ], 00:12:06.161 "product_name": "Malloc disk", 00:12:06.161 "block_size": 512, 00:12:06.161 "num_blocks": 65536, 00:12:06.161 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:06.161 "assigned_rate_limits": { 00:12:06.161 "rw_ios_per_sec": 0, 00:12:06.161 "rw_mbytes_per_sec": 0, 00:12:06.161 "r_mbytes_per_sec": 0, 00:12:06.161 "w_mbytes_per_sec": 0 00:12:06.161 }, 00:12:06.161 "claimed": true, 00:12:06.161 "claim_type": "exclusive_write", 00:12:06.161 "zoned": false, 00:12:06.161 "supported_io_types": { 
00:12:06.161 "read": true, 00:12:06.161 "write": true, 00:12:06.161 "unmap": true, 00:12:06.161 "flush": true, 00:12:06.161 "reset": true, 00:12:06.161 "nvme_admin": false, 00:12:06.161 "nvme_io": false, 00:12:06.161 "nvme_io_md": false, 00:12:06.161 "write_zeroes": true, 00:12:06.161 "zcopy": true, 00:12:06.161 "get_zone_info": false, 00:12:06.161 "zone_management": false, 00:12:06.161 "zone_append": false, 00:12:06.161 "compare": false, 00:12:06.161 "compare_and_write": false, 00:12:06.161 "abort": true, 00:12:06.161 "seek_hole": false, 00:12:06.161 "seek_data": false, 00:12:06.161 "copy": true, 00:12:06.161 "nvme_iov_md": false 00:12:06.161 }, 00:12:06.161 "memory_domains": [ 00:12:06.161 { 00:12:06.161 "dma_device_id": "system", 00:12:06.161 "dma_device_type": 1 00:12:06.161 }, 00:12:06.161 { 00:12:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.161 "dma_device_type": 2 00:12:06.161 } 00:12:06.161 ], 00:12:06.161 "driver_specific": {} 00:12:06.161 } 00:12:06.161 ] 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.161 "name": "Existed_Raid", 00:12:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.161 "strip_size_kb": 64, 00:12:06.161 "state": "configuring", 00:12:06.161 "raid_level": "raid0", 00:12:06.161 "superblock": false, 00:12:06.161 "num_base_bdevs": 4, 00:12:06.161 "num_base_bdevs_discovered": 2, 00:12:06.161 "num_base_bdevs_operational": 4, 00:12:06.161 "base_bdevs_list": [ 00:12:06.161 { 00:12:06.161 "name": "BaseBdev1", 00:12:06.161 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:06.161 "is_configured": true, 00:12:06.161 "data_offset": 0, 00:12:06.161 "data_size": 65536 00:12:06.161 }, 00:12:06.161 { 00:12:06.161 "name": "BaseBdev2", 00:12:06.161 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:06.161 
"is_configured": true, 00:12:06.161 "data_offset": 0, 00:12:06.161 "data_size": 65536 00:12:06.161 }, 00:12:06.161 { 00:12:06.161 "name": "BaseBdev3", 00:12:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.161 "is_configured": false, 00:12:06.161 "data_offset": 0, 00:12:06.161 "data_size": 0 00:12:06.161 }, 00:12:06.161 { 00:12:06.161 "name": "BaseBdev4", 00:12:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.161 "is_configured": false, 00:12:06.161 "data_offset": 0, 00:12:06.161 "data_size": 0 00:12:06.161 } 00:12:06.161 ] 00:12:06.161 }' 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.161 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 [2024-10-08 16:19:59.805263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.727 BaseBdev3 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 [ 00:12:06.727 { 00:12:06.727 "name": "BaseBdev3", 00:12:06.727 "aliases": [ 00:12:06.727 "fafb35db-51ef-42a1-81f9-023a12182682" 00:12:06.727 ], 00:12:06.727 "product_name": "Malloc disk", 00:12:06.727 "block_size": 512, 00:12:06.727 "num_blocks": 65536, 00:12:06.727 "uuid": "fafb35db-51ef-42a1-81f9-023a12182682", 00:12:06.727 "assigned_rate_limits": { 00:12:06.727 "rw_ios_per_sec": 0, 00:12:06.727 "rw_mbytes_per_sec": 0, 00:12:06.727 "r_mbytes_per_sec": 0, 00:12:06.727 "w_mbytes_per_sec": 0 00:12:06.727 }, 00:12:06.727 "claimed": true, 00:12:06.727 "claim_type": "exclusive_write", 00:12:06.727 "zoned": false, 00:12:06.727 "supported_io_types": { 00:12:06.727 "read": true, 00:12:06.727 "write": true, 00:12:06.727 "unmap": true, 00:12:06.727 "flush": true, 00:12:06.727 "reset": true, 00:12:06.727 "nvme_admin": false, 00:12:06.727 "nvme_io": false, 00:12:06.727 "nvme_io_md": false, 00:12:06.727 "write_zeroes": true, 00:12:06.727 "zcopy": true, 00:12:06.727 "get_zone_info": false, 00:12:06.727 "zone_management": false, 00:12:06.727 "zone_append": false, 00:12:06.727 "compare": false, 00:12:06.727 "compare_and_write": false, 
00:12:06.727 "abort": true, 00:12:06.727 "seek_hole": false, 00:12:06.727 "seek_data": false, 00:12:06.727 "copy": true, 00:12:06.727 "nvme_iov_md": false 00:12:06.727 }, 00:12:06.727 "memory_domains": [ 00:12:06.727 { 00:12:06.727 "dma_device_id": "system", 00:12:06.727 "dma_device_type": 1 00:12:06.727 }, 00:12:06.727 { 00:12:06.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.727 "dma_device_type": 2 00:12:06.727 } 00:12:06.727 ], 00:12:06.727 "driver_specific": {} 00:12:06.727 } 00:12:06.727 ] 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:06.727 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.728 "name": "Existed_Raid", 00:12:06.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.728 "strip_size_kb": 64, 00:12:06.728 "state": "configuring", 00:12:06.728 "raid_level": "raid0", 00:12:06.728 "superblock": false, 00:12:06.728 "num_base_bdevs": 4, 00:12:06.728 "num_base_bdevs_discovered": 3, 00:12:06.728 "num_base_bdevs_operational": 4, 00:12:06.728 "base_bdevs_list": [ 00:12:06.728 { 00:12:06.728 "name": "BaseBdev1", 00:12:06.728 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:06.728 "is_configured": true, 00:12:06.728 "data_offset": 0, 00:12:06.728 "data_size": 65536 00:12:06.728 }, 00:12:06.728 { 00:12:06.728 "name": "BaseBdev2", 00:12:06.728 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:06.728 "is_configured": true, 00:12:06.728 "data_offset": 0, 00:12:06.728 "data_size": 65536 00:12:06.728 }, 00:12:06.728 { 00:12:06.728 "name": "BaseBdev3", 00:12:06.728 "uuid": "fafb35db-51ef-42a1-81f9-023a12182682", 00:12:06.728 "is_configured": true, 00:12:06.728 "data_offset": 0, 00:12:06.728 "data_size": 65536 00:12:06.728 }, 00:12:06.728 { 00:12:06.728 "name": "BaseBdev4", 00:12:06.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.728 "is_configured": false, 
00:12:06.728 "data_offset": 0, 00:12:06.728 "data_size": 0 00:12:06.728 } 00:12:06.728 ] 00:12:06.728 }' 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.728 16:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.294 [2024-10-08 16:20:00.400769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.294 [2024-10-08 16:20:00.400832] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:07.294 [2024-10-08 16:20:00.400851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:07.294 [2024-10-08 16:20:00.401179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:07.294 [2024-10-08 16:20:00.401364] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:07.294 [2024-10-08 16:20:00.401388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:07.294 [2024-10-08 16:20:00.401676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.294 BaseBdev4 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.294 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.294 [ 00:12:07.294 { 00:12:07.294 "name": "BaseBdev4", 00:12:07.294 "aliases": [ 00:12:07.294 "27d719ee-648a-4cc5-a4e8-71b388279fdc" 00:12:07.294 ], 00:12:07.294 "product_name": "Malloc disk", 00:12:07.295 "block_size": 512, 00:12:07.295 "num_blocks": 65536, 00:12:07.295 "uuid": "27d719ee-648a-4cc5-a4e8-71b388279fdc", 00:12:07.295 "assigned_rate_limits": { 00:12:07.295 "rw_ios_per_sec": 0, 00:12:07.295 "rw_mbytes_per_sec": 0, 00:12:07.295 "r_mbytes_per_sec": 0, 00:12:07.295 "w_mbytes_per_sec": 0 00:12:07.295 }, 00:12:07.295 "claimed": true, 00:12:07.295 "claim_type": "exclusive_write", 00:12:07.295 "zoned": false, 00:12:07.295 "supported_io_types": { 00:12:07.295 "read": true, 00:12:07.295 "write": true, 00:12:07.295 "unmap": true, 00:12:07.295 "flush": true, 00:12:07.295 "reset": true, 00:12:07.295 
"nvme_admin": false, 00:12:07.295 "nvme_io": false, 00:12:07.295 "nvme_io_md": false, 00:12:07.295 "write_zeroes": true, 00:12:07.295 "zcopy": true, 00:12:07.295 "get_zone_info": false, 00:12:07.295 "zone_management": false, 00:12:07.295 "zone_append": false, 00:12:07.295 "compare": false, 00:12:07.295 "compare_and_write": false, 00:12:07.295 "abort": true, 00:12:07.295 "seek_hole": false, 00:12:07.295 "seek_data": false, 00:12:07.295 "copy": true, 00:12:07.295 "nvme_iov_md": false 00:12:07.295 }, 00:12:07.295 "memory_domains": [ 00:12:07.295 { 00:12:07.295 "dma_device_id": "system", 00:12:07.295 "dma_device_type": 1 00:12:07.295 }, 00:12:07.295 { 00:12:07.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.295 "dma_device_type": 2 00:12:07.295 } 00:12:07.295 ], 00:12:07.295 "driver_specific": {} 00:12:07.295 } 00:12:07.295 ] 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.295 16:20:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.295 "name": "Existed_Raid", 00:12:07.295 "uuid": "65233307-d75f-4a52-ba73-e505ba1bcb79", 00:12:07.295 "strip_size_kb": 64, 00:12:07.295 "state": "online", 00:12:07.295 "raid_level": "raid0", 00:12:07.295 "superblock": false, 00:12:07.295 "num_base_bdevs": 4, 00:12:07.295 "num_base_bdevs_discovered": 4, 00:12:07.295 "num_base_bdevs_operational": 4, 00:12:07.295 "base_bdevs_list": [ 00:12:07.295 { 00:12:07.295 "name": "BaseBdev1", 00:12:07.295 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:07.295 "is_configured": true, 00:12:07.295 "data_offset": 0, 00:12:07.295 "data_size": 65536 00:12:07.295 }, 00:12:07.295 { 00:12:07.295 "name": "BaseBdev2", 00:12:07.295 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:07.295 "is_configured": true, 00:12:07.295 "data_offset": 0, 00:12:07.295 "data_size": 65536 00:12:07.295 }, 00:12:07.295 { 00:12:07.295 "name": "BaseBdev3", 00:12:07.295 "uuid": 
"fafb35db-51ef-42a1-81f9-023a12182682", 00:12:07.295 "is_configured": true, 00:12:07.295 "data_offset": 0, 00:12:07.295 "data_size": 65536 00:12:07.295 }, 00:12:07.295 { 00:12:07.295 "name": "BaseBdev4", 00:12:07.295 "uuid": "27d719ee-648a-4cc5-a4e8-71b388279fdc", 00:12:07.295 "is_configured": true, 00:12:07.295 "data_offset": 0, 00:12:07.295 "data_size": 65536 00:12:07.295 } 00:12:07.295 ] 00:12:07.295 }' 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.295 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.915 [2024-10-08 16:20:00.893404] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.915 16:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.915 16:20:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.915 "name": "Existed_Raid", 00:12:07.915 "aliases": [ 00:12:07.915 "65233307-d75f-4a52-ba73-e505ba1bcb79" 00:12:07.915 ], 00:12:07.915 "product_name": "Raid Volume", 00:12:07.915 "block_size": 512, 00:12:07.915 "num_blocks": 262144, 00:12:07.915 "uuid": "65233307-d75f-4a52-ba73-e505ba1bcb79", 00:12:07.915 "assigned_rate_limits": { 00:12:07.915 "rw_ios_per_sec": 0, 00:12:07.915 "rw_mbytes_per_sec": 0, 00:12:07.915 "r_mbytes_per_sec": 0, 00:12:07.915 "w_mbytes_per_sec": 0 00:12:07.915 }, 00:12:07.915 "claimed": false, 00:12:07.915 "zoned": false, 00:12:07.915 "supported_io_types": { 00:12:07.915 "read": true, 00:12:07.915 "write": true, 00:12:07.915 "unmap": true, 00:12:07.915 "flush": true, 00:12:07.915 "reset": true, 00:12:07.915 "nvme_admin": false, 00:12:07.915 "nvme_io": false, 00:12:07.915 "nvme_io_md": false, 00:12:07.915 "write_zeroes": true, 00:12:07.915 "zcopy": false, 00:12:07.915 "get_zone_info": false, 00:12:07.915 "zone_management": false, 00:12:07.915 "zone_append": false, 00:12:07.915 "compare": false, 00:12:07.915 "compare_and_write": false, 00:12:07.915 "abort": false, 00:12:07.915 "seek_hole": false, 00:12:07.915 "seek_data": false, 00:12:07.915 "copy": false, 00:12:07.915 "nvme_iov_md": false 00:12:07.915 }, 00:12:07.915 "memory_domains": [ 00:12:07.915 { 00:12:07.915 "dma_device_id": "system", 00:12:07.915 "dma_device_type": 1 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.915 "dma_device_type": 2 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "system", 00:12:07.915 "dma_device_type": 1 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.915 "dma_device_type": 2 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "system", 00:12:07.915 "dma_device_type": 1 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:07.915 "dma_device_type": 2 00:12:07.915 }, 00:12:07.915 { 00:12:07.915 "dma_device_id": "system", 00:12:07.915 "dma_device_type": 1 00:12:07.915 }, 00:12:07.916 { 00:12:07.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.916 "dma_device_type": 2 00:12:07.916 } 00:12:07.916 ], 00:12:07.916 "driver_specific": { 00:12:07.916 "raid": { 00:12:07.916 "uuid": "65233307-d75f-4a52-ba73-e505ba1bcb79", 00:12:07.916 "strip_size_kb": 64, 00:12:07.916 "state": "online", 00:12:07.916 "raid_level": "raid0", 00:12:07.916 "superblock": false, 00:12:07.916 "num_base_bdevs": 4, 00:12:07.916 "num_base_bdevs_discovered": 4, 00:12:07.916 "num_base_bdevs_operational": 4, 00:12:07.916 "base_bdevs_list": [ 00:12:07.916 { 00:12:07.916 "name": "BaseBdev1", 00:12:07.916 "uuid": "c8efa064-760b-481b-92d8-82aab55d4640", 00:12:07.916 "is_configured": true, 00:12:07.916 "data_offset": 0, 00:12:07.916 "data_size": 65536 00:12:07.916 }, 00:12:07.916 { 00:12:07.916 "name": "BaseBdev2", 00:12:07.916 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:07.916 "is_configured": true, 00:12:07.916 "data_offset": 0, 00:12:07.916 "data_size": 65536 00:12:07.916 }, 00:12:07.916 { 00:12:07.916 "name": "BaseBdev3", 00:12:07.916 "uuid": "fafb35db-51ef-42a1-81f9-023a12182682", 00:12:07.916 "is_configured": true, 00:12:07.916 "data_offset": 0, 00:12:07.916 "data_size": 65536 00:12:07.916 }, 00:12:07.916 { 00:12:07.916 "name": "BaseBdev4", 00:12:07.916 "uuid": "27d719ee-648a-4cc5-a4e8-71b388279fdc", 00:12:07.916 "is_configured": true, 00:12:07.916 "data_offset": 0, 00:12:07.916 "data_size": 65536 00:12:07.916 } 00:12:07.916 ] 00:12:07.916 } 00:12:07.916 } 00:12:07.916 }' 00:12:07.916 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.916 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:07.916 BaseBdev2 00:12:07.916 BaseBdev3 
00:12:07.916 BaseBdev4' 00:12:07.916 16:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.916 16:20:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.916 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.175 16:20:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.175 [2024-10-08 16:20:01.261203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.175 [2024-10-08 16:20:01.261475] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.175 [2024-10-08 16:20:01.261586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.175 "name": "Existed_Raid", 00:12:08.175 "uuid": "65233307-d75f-4a52-ba73-e505ba1bcb79", 00:12:08.175 "strip_size_kb": 64, 00:12:08.175 "state": "offline", 00:12:08.175 "raid_level": "raid0", 00:12:08.175 "superblock": false, 00:12:08.175 "num_base_bdevs": 4, 00:12:08.175 "num_base_bdevs_discovered": 3, 00:12:08.175 "num_base_bdevs_operational": 3, 00:12:08.175 "base_bdevs_list": [ 00:12:08.175 { 00:12:08.175 "name": null, 00:12:08.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.175 "is_configured": false, 00:12:08.175 "data_offset": 0, 00:12:08.175 "data_size": 65536 00:12:08.175 }, 00:12:08.175 { 00:12:08.175 "name": "BaseBdev2", 00:12:08.175 "uuid": "b661a7a6-5e36-4a49-9aa6-40855d0a596a", 00:12:08.175 "is_configured": 
true, 00:12:08.175 "data_offset": 0, 00:12:08.175 "data_size": 65536 00:12:08.175 }, 00:12:08.175 { 00:12:08.175 "name": "BaseBdev3", 00:12:08.175 "uuid": "fafb35db-51ef-42a1-81f9-023a12182682", 00:12:08.175 "is_configured": true, 00:12:08.175 "data_offset": 0, 00:12:08.175 "data_size": 65536 00:12:08.175 }, 00:12:08.175 { 00:12:08.175 "name": "BaseBdev4", 00:12:08.175 "uuid": "27d719ee-648a-4cc5-a4e8-71b388279fdc", 00:12:08.175 "is_configured": true, 00:12:08.175 "data_offset": 0, 00:12:08.175 "data_size": 65536 00:12:08.175 } 00:12:08.175 ] 00:12:08.175 }' 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.175 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.740 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.741 [2024-10-08 16:20:01.910563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.741 16:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.741 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.741 [2024-10-08 16:20:02.039137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.998 16:20:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.998 [2024-10-08 16:20:02.176162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:08.998 [2024-10-08 16:20:02.176249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.998 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 BaseBdev2 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 [ 00:12:09.257 { 00:12:09.257 "name": "BaseBdev2", 00:12:09.257 "aliases": [ 00:12:09.257 "4f9214cc-e5c6-487f-8e3f-4a178e41bebb" 00:12:09.257 ], 00:12:09.257 "product_name": "Malloc disk", 00:12:09.257 "block_size": 512, 00:12:09.257 "num_blocks": 65536, 00:12:09.257 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:09.257 "assigned_rate_limits": { 00:12:09.257 "rw_ios_per_sec": 0, 00:12:09.257 "rw_mbytes_per_sec": 0, 00:12:09.257 "r_mbytes_per_sec": 0, 00:12:09.257 "w_mbytes_per_sec": 0 00:12:09.257 }, 00:12:09.257 "claimed": false, 00:12:09.257 "zoned": false, 00:12:09.257 "supported_io_types": { 00:12:09.257 "read": true, 00:12:09.257 "write": true, 00:12:09.257 "unmap": true, 00:12:09.257 "flush": true, 00:12:09.257 "reset": true, 00:12:09.257 "nvme_admin": false, 00:12:09.257 "nvme_io": false, 00:12:09.257 "nvme_io_md": false, 00:12:09.257 "write_zeroes": true, 00:12:09.257 "zcopy": true, 00:12:09.257 "get_zone_info": false, 00:12:09.257 "zone_management": false, 00:12:09.257 "zone_append": false, 00:12:09.257 "compare": false, 00:12:09.257 "compare_and_write": false, 00:12:09.257 "abort": true, 00:12:09.257 "seek_hole": false, 00:12:09.257 
"seek_data": false, 00:12:09.257 "copy": true, 00:12:09.257 "nvme_iov_md": false 00:12:09.257 }, 00:12:09.257 "memory_domains": [ 00:12:09.257 { 00:12:09.257 "dma_device_id": "system", 00:12:09.257 "dma_device_type": 1 00:12:09.257 }, 00:12:09.257 { 00:12:09.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.257 "dma_device_type": 2 00:12:09.257 } 00:12:09.257 ], 00:12:09.257 "driver_specific": {} 00:12:09.257 } 00:12:09.257 ] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 BaseBdev3 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 [ 00:12:09.257 { 00:12:09.257 "name": "BaseBdev3", 00:12:09.257 "aliases": [ 00:12:09.257 "9b13aa4e-3c2a-4df7-a0be-3639d7052b70" 00:12:09.257 ], 00:12:09.257 "product_name": "Malloc disk", 00:12:09.257 "block_size": 512, 00:12:09.257 "num_blocks": 65536, 00:12:09.257 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:09.257 "assigned_rate_limits": { 00:12:09.257 "rw_ios_per_sec": 0, 00:12:09.257 "rw_mbytes_per_sec": 0, 00:12:09.257 "r_mbytes_per_sec": 0, 00:12:09.257 "w_mbytes_per_sec": 0 00:12:09.257 }, 00:12:09.257 "claimed": false, 00:12:09.257 "zoned": false, 00:12:09.257 "supported_io_types": { 00:12:09.257 "read": true, 00:12:09.257 "write": true, 00:12:09.257 "unmap": true, 00:12:09.257 "flush": true, 00:12:09.257 "reset": true, 00:12:09.257 "nvme_admin": false, 00:12:09.257 "nvme_io": false, 00:12:09.257 "nvme_io_md": false, 00:12:09.257 "write_zeroes": true, 00:12:09.257 "zcopy": true, 00:12:09.257 "get_zone_info": false, 00:12:09.257 "zone_management": false, 00:12:09.257 "zone_append": false, 00:12:09.257 "compare": false, 00:12:09.257 "compare_and_write": false, 00:12:09.257 "abort": true, 00:12:09.257 "seek_hole": false, 00:12:09.257 "seek_data": false, 
00:12:09.257 "copy": true, 00:12:09.257 "nvme_iov_md": false 00:12:09.257 }, 00:12:09.257 "memory_domains": [ 00:12:09.257 { 00:12:09.257 "dma_device_id": "system", 00:12:09.257 "dma_device_type": 1 00:12:09.257 }, 00:12:09.257 { 00:12:09.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.257 "dma_device_type": 2 00:12:09.257 } 00:12:09.257 ], 00:12:09.257 "driver_specific": {} 00:12:09.257 } 00:12:09.257 ] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 BaseBdev4 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.257 
16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.257 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 [ 00:12:09.257 { 00:12:09.257 "name": "BaseBdev4", 00:12:09.257 "aliases": [ 00:12:09.257 "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33" 00:12:09.257 ], 00:12:09.257 "product_name": "Malloc disk", 00:12:09.257 "block_size": 512, 00:12:09.257 "num_blocks": 65536, 00:12:09.257 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:09.257 "assigned_rate_limits": { 00:12:09.257 "rw_ios_per_sec": 0, 00:12:09.257 "rw_mbytes_per_sec": 0, 00:12:09.257 "r_mbytes_per_sec": 0, 00:12:09.257 "w_mbytes_per_sec": 0 00:12:09.257 }, 00:12:09.257 "claimed": false, 00:12:09.257 "zoned": false, 00:12:09.257 "supported_io_types": { 00:12:09.257 "read": true, 00:12:09.257 "write": true, 00:12:09.257 "unmap": true, 00:12:09.257 "flush": true, 00:12:09.257 "reset": true, 00:12:09.257 "nvme_admin": false, 00:12:09.257 "nvme_io": false, 00:12:09.257 "nvme_io_md": false, 00:12:09.257 "write_zeroes": true, 00:12:09.257 "zcopy": true, 00:12:09.257 "get_zone_info": false, 00:12:09.257 "zone_management": false, 00:12:09.257 "zone_append": false, 00:12:09.258 "compare": false, 00:12:09.258 "compare_and_write": false, 00:12:09.258 "abort": true, 00:12:09.258 "seek_hole": false, 00:12:09.258 "seek_data": false, 00:12:09.258 
"copy": true, 00:12:09.258 "nvme_iov_md": false 00:12:09.258 }, 00:12:09.258 "memory_domains": [ 00:12:09.258 { 00:12:09.258 "dma_device_id": "system", 00:12:09.258 "dma_device_type": 1 00:12:09.258 }, 00:12:09.258 { 00:12:09.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.258 "dma_device_type": 2 00:12:09.258 } 00:12:09.258 ], 00:12:09.258 "driver_specific": {} 00:12:09.258 } 00:12:09.258 ] 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.258 [2024-10-08 16:20:02.558111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.258 [2024-10-08 16:20:02.558186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.258 [2024-10-08 16:20:02.558228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.258 [2024-10-08 16:20:02.560719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.258 [2024-10-08 16:20:02.560788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.258 16:20:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.258 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.516 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.516 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.516 "name": "Existed_Raid", 00:12:09.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.516 "strip_size_kb": 64, 00:12:09.516 "state": "configuring", 00:12:09.516 
"raid_level": "raid0", 00:12:09.516 "superblock": false, 00:12:09.516 "num_base_bdevs": 4, 00:12:09.516 "num_base_bdevs_discovered": 3, 00:12:09.516 "num_base_bdevs_operational": 4, 00:12:09.516 "base_bdevs_list": [ 00:12:09.516 { 00:12:09.516 "name": "BaseBdev1", 00:12:09.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.516 "is_configured": false, 00:12:09.517 "data_offset": 0, 00:12:09.517 "data_size": 0 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "name": "BaseBdev2", 00:12:09.517 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:09.517 "is_configured": true, 00:12:09.517 "data_offset": 0, 00:12:09.517 "data_size": 65536 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "name": "BaseBdev3", 00:12:09.517 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:09.517 "is_configured": true, 00:12:09.517 "data_offset": 0, 00:12:09.517 "data_size": 65536 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "name": "BaseBdev4", 00:12:09.517 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:09.517 "is_configured": true, 00:12:09.517 "data_offset": 0, 00:12:09.517 "data_size": 65536 00:12:09.517 } 00:12:09.517 ] 00:12:09.517 }' 00:12:09.517 16:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.517 16:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.776 [2024-10-08 16:20:03.050281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.776 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.037 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.037 "name": "Existed_Raid", 00:12:10.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.037 "strip_size_kb": 64, 00:12:10.037 "state": "configuring", 00:12:10.037 "raid_level": "raid0", 00:12:10.037 "superblock": false, 00:12:10.037 
"num_base_bdevs": 4, 00:12:10.037 "num_base_bdevs_discovered": 2, 00:12:10.037 "num_base_bdevs_operational": 4, 00:12:10.037 "base_bdevs_list": [ 00:12:10.037 { 00:12:10.037 "name": "BaseBdev1", 00:12:10.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.037 "is_configured": false, 00:12:10.037 "data_offset": 0, 00:12:10.037 "data_size": 0 00:12:10.037 }, 00:12:10.037 { 00:12:10.037 "name": null, 00:12:10.037 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:10.037 "is_configured": false, 00:12:10.037 "data_offset": 0, 00:12:10.037 "data_size": 65536 00:12:10.037 }, 00:12:10.037 { 00:12:10.037 "name": "BaseBdev3", 00:12:10.037 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:10.037 "is_configured": true, 00:12:10.037 "data_offset": 0, 00:12:10.037 "data_size": 65536 00:12:10.037 }, 00:12:10.037 { 00:12:10.037 "name": "BaseBdev4", 00:12:10.037 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:10.037 "is_configured": true, 00:12:10.037 "data_offset": 0, 00:12:10.037 "data_size": 65536 00:12:10.037 } 00:12:10.037 ] 00:12:10.037 }' 00:12:10.037 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.037 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:10.296 16:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.296 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.553 [2024-10-08 16:20:03.656333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.553 BaseBdev1 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.553 16:20:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.553 [ 00:12:10.553 { 00:12:10.553 "name": "BaseBdev1", 00:12:10.553 "aliases": [ 00:12:10.553 "e288cac1-c774-486d-bde2-bfd72e0ce7d2" 00:12:10.553 ], 00:12:10.553 "product_name": "Malloc disk", 00:12:10.553 "block_size": 512, 00:12:10.554 "num_blocks": 65536, 00:12:10.554 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:10.554 "assigned_rate_limits": { 00:12:10.554 "rw_ios_per_sec": 0, 00:12:10.554 "rw_mbytes_per_sec": 0, 00:12:10.554 "r_mbytes_per_sec": 0, 00:12:10.554 "w_mbytes_per_sec": 0 00:12:10.554 }, 00:12:10.554 "claimed": true, 00:12:10.554 "claim_type": "exclusive_write", 00:12:10.554 "zoned": false, 00:12:10.554 "supported_io_types": { 00:12:10.554 "read": true, 00:12:10.554 "write": true, 00:12:10.554 "unmap": true, 00:12:10.554 "flush": true, 00:12:10.554 "reset": true, 00:12:10.554 "nvme_admin": false, 00:12:10.554 "nvme_io": false, 00:12:10.554 "nvme_io_md": false, 00:12:10.554 "write_zeroes": true, 00:12:10.554 "zcopy": true, 00:12:10.554 "get_zone_info": false, 00:12:10.554 "zone_management": false, 00:12:10.554 "zone_append": false, 00:12:10.554 "compare": false, 00:12:10.554 "compare_and_write": false, 00:12:10.554 "abort": true, 00:12:10.554 "seek_hole": false, 00:12:10.554 "seek_data": false, 00:12:10.554 "copy": true, 00:12:10.554 "nvme_iov_md": false 00:12:10.554 }, 00:12:10.554 "memory_domains": [ 00:12:10.554 { 00:12:10.554 "dma_device_id": "system", 00:12:10.554 "dma_device_type": 1 00:12:10.554 }, 00:12:10.554 { 00:12:10.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.554 "dma_device_type": 2 00:12:10.554 } 00:12:10.554 ], 00:12:10.554 "driver_specific": {} 00:12:10.554 } 00:12:10.554 ] 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.554 "name": "Existed_Raid", 00:12:10.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.554 "strip_size_kb": 64, 00:12:10.554 "state": "configuring", 00:12:10.554 "raid_level": "raid0", 00:12:10.554 "superblock": false, 
00:12:10.554 "num_base_bdevs": 4, 00:12:10.554 "num_base_bdevs_discovered": 3, 00:12:10.554 "num_base_bdevs_operational": 4, 00:12:10.554 "base_bdevs_list": [ 00:12:10.554 { 00:12:10.554 "name": "BaseBdev1", 00:12:10.554 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:10.554 "is_configured": true, 00:12:10.554 "data_offset": 0, 00:12:10.554 "data_size": 65536 00:12:10.554 }, 00:12:10.554 { 00:12:10.554 "name": null, 00:12:10.554 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:10.554 "is_configured": false, 00:12:10.554 "data_offset": 0, 00:12:10.554 "data_size": 65536 00:12:10.554 }, 00:12:10.554 { 00:12:10.554 "name": "BaseBdev3", 00:12:10.554 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:10.554 "is_configured": true, 00:12:10.554 "data_offset": 0, 00:12:10.554 "data_size": 65536 00:12:10.554 }, 00:12:10.554 { 00:12:10.554 "name": "BaseBdev4", 00:12:10.554 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:10.554 "is_configured": true, 00:12:10.554 "data_offset": 0, 00:12:10.554 "data_size": 65536 00:12:10.554 } 00:12:10.554 ] 00:12:10.554 }' 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.554 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:11.120 16:20:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.120 [2024-10-08 16:20:04.268649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.120 "name": "Existed_Raid", 00:12:11.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.120 "strip_size_kb": 64, 00:12:11.120 "state": "configuring", 00:12:11.120 "raid_level": "raid0", 00:12:11.120 "superblock": false, 00:12:11.120 "num_base_bdevs": 4, 00:12:11.120 "num_base_bdevs_discovered": 2, 00:12:11.120 "num_base_bdevs_operational": 4, 00:12:11.120 "base_bdevs_list": [ 00:12:11.120 { 00:12:11.120 "name": "BaseBdev1", 00:12:11.120 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:11.120 "is_configured": true, 00:12:11.120 "data_offset": 0, 00:12:11.120 "data_size": 65536 00:12:11.120 }, 00:12:11.120 { 00:12:11.120 "name": null, 00:12:11.120 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:11.120 "is_configured": false, 00:12:11.120 "data_offset": 0, 00:12:11.120 "data_size": 65536 00:12:11.120 }, 00:12:11.120 { 00:12:11.120 "name": null, 00:12:11.120 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:11.120 "is_configured": false, 00:12:11.120 "data_offset": 0, 00:12:11.120 "data_size": 65536 00:12:11.120 }, 00:12:11.120 { 00:12:11.120 "name": "BaseBdev4", 00:12:11.120 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:11.120 "is_configured": true, 00:12:11.120 "data_offset": 0, 00:12:11.120 "data_size": 65536 00:12:11.120 } 00:12:11.120 ] 00:12:11.120 }' 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.120 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.686 [2024-10-08 16:20:04.840805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.686 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.686 "name": "Existed_Raid", 00:12:11.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.686 "strip_size_kb": 64, 00:12:11.686 "state": "configuring", 00:12:11.686 "raid_level": "raid0", 00:12:11.686 "superblock": false, 00:12:11.686 "num_base_bdevs": 4, 00:12:11.686 "num_base_bdevs_discovered": 3, 00:12:11.686 "num_base_bdevs_operational": 4, 00:12:11.686 "base_bdevs_list": [ 00:12:11.686 { 00:12:11.686 "name": "BaseBdev1", 00:12:11.686 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:11.686 "is_configured": true, 00:12:11.686 "data_offset": 0, 00:12:11.687 "data_size": 65536 00:12:11.687 }, 00:12:11.687 { 00:12:11.687 "name": null, 00:12:11.687 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:11.687 "is_configured": false, 00:12:11.687 "data_offset": 0, 00:12:11.687 "data_size": 65536 00:12:11.687 }, 00:12:11.687 { 00:12:11.687 "name": "BaseBdev3", 00:12:11.687 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:11.687 "is_configured": 
true, 00:12:11.687 "data_offset": 0, 00:12:11.687 "data_size": 65536 00:12:11.687 }, 00:12:11.687 { 00:12:11.687 "name": "BaseBdev4", 00:12:11.687 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:11.687 "is_configured": true, 00:12:11.687 "data_offset": 0, 00:12:11.687 "data_size": 65536 00:12:11.687 } 00:12:11.687 ] 00:12:11.687 }' 00:12:11.687 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.687 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.253 [2024-10-08 16:20:05.409001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.253 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.253 "name": "Existed_Raid", 00:12:12.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.253 "strip_size_kb": 64, 00:12:12.253 "state": "configuring", 00:12:12.253 "raid_level": "raid0", 00:12:12.253 "superblock": false, 00:12:12.253 "num_base_bdevs": 4, 00:12:12.253 "num_base_bdevs_discovered": 2, 00:12:12.253 "num_base_bdevs_operational": 4, 00:12:12.253 
"base_bdevs_list": [ 00:12:12.253 { 00:12:12.253 "name": null, 00:12:12.254 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:12.254 "is_configured": false, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": null, 00:12:12.254 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:12.254 "is_configured": false, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": "BaseBdev3", 00:12:12.254 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:12.254 "is_configured": true, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": "BaseBdev4", 00:12:12.254 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:12.254 "is_configured": true, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 } 00:12:12.254 ] 00:12:12.254 }' 00:12:12.254 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.254 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.820 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.820 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.820 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.820 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:12.820 16:20:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.820 [2024-10-08 16:20:06.042683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.820 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.820 "name": "Existed_Raid", 00:12:12.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.820 "strip_size_kb": 64, 00:12:12.820 "state": "configuring", 00:12:12.820 "raid_level": "raid0", 00:12:12.820 "superblock": false, 00:12:12.820 "num_base_bdevs": 4, 00:12:12.820 "num_base_bdevs_discovered": 3, 00:12:12.820 "num_base_bdevs_operational": 4, 00:12:12.820 "base_bdevs_list": [ 00:12:12.820 { 00:12:12.820 "name": null, 00:12:12.820 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:12.820 "is_configured": false, 00:12:12.820 "data_offset": 0, 00:12:12.820 "data_size": 65536 00:12:12.820 }, 00:12:12.820 { 00:12:12.820 "name": "BaseBdev2", 00:12:12.820 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:12.820 "is_configured": true, 00:12:12.820 "data_offset": 0, 00:12:12.820 "data_size": 65536 00:12:12.820 }, 00:12:12.820 { 00:12:12.820 "name": "BaseBdev3", 00:12:12.820 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:12.820 "is_configured": true, 00:12:12.820 "data_offset": 0, 00:12:12.820 "data_size": 65536 00:12:12.820 }, 00:12:12.820 { 00:12:12.820 "name": "BaseBdev4", 00:12:12.821 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:12.821 "is_configured": true, 00:12:12.821 "data_offset": 0, 00:12:12.821 "data_size": 65536 00:12:12.821 } 00:12:12.821 ] 00:12:12.821 }' 00:12:12.821 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.821 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e288cac1-c774-486d-bde2-bfd72e0ce7d2 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.387 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.387 [2024-10-08 16:20:06.707866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:13.387 [2024-10-08 16:20:06.707931] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:13.387 [2024-10-08 16:20:06.707943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:13.387 [2024-10-08 16:20:06.708253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:13.387 [2024-10-08 16:20:06.708463] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:13.387 [2024-10-08 16:20:06.708485] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:13.387 NewBaseBdev 00:12:13.387 [2024-10-08 16:20:06.708813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.646 [ 00:12:13.646 { 
00:12:13.646 "name": "NewBaseBdev", 00:12:13.646 "aliases": [ 00:12:13.646 "e288cac1-c774-486d-bde2-bfd72e0ce7d2" 00:12:13.646 ], 00:12:13.646 "product_name": "Malloc disk", 00:12:13.646 "block_size": 512, 00:12:13.646 "num_blocks": 65536, 00:12:13.646 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:13.646 "assigned_rate_limits": { 00:12:13.646 "rw_ios_per_sec": 0, 00:12:13.646 "rw_mbytes_per_sec": 0, 00:12:13.646 "r_mbytes_per_sec": 0, 00:12:13.646 "w_mbytes_per_sec": 0 00:12:13.646 }, 00:12:13.646 "claimed": true, 00:12:13.646 "claim_type": "exclusive_write", 00:12:13.646 "zoned": false, 00:12:13.646 "supported_io_types": { 00:12:13.646 "read": true, 00:12:13.646 "write": true, 00:12:13.646 "unmap": true, 00:12:13.646 "flush": true, 00:12:13.646 "reset": true, 00:12:13.646 "nvme_admin": false, 00:12:13.646 "nvme_io": false, 00:12:13.646 "nvme_io_md": false, 00:12:13.646 "write_zeroes": true, 00:12:13.646 "zcopy": true, 00:12:13.646 "get_zone_info": false, 00:12:13.646 "zone_management": false, 00:12:13.646 "zone_append": false, 00:12:13.646 "compare": false, 00:12:13.646 "compare_and_write": false, 00:12:13.646 "abort": true, 00:12:13.646 "seek_hole": false, 00:12:13.646 "seek_data": false, 00:12:13.646 "copy": true, 00:12:13.646 "nvme_iov_md": false 00:12:13.646 }, 00:12:13.646 "memory_domains": [ 00:12:13.646 { 00:12:13.646 "dma_device_id": "system", 00:12:13.646 "dma_device_type": 1 00:12:13.646 }, 00:12:13.646 { 00:12:13.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.646 "dma_device_type": 2 00:12:13.646 } 00:12:13.646 ], 00:12:13.646 "driver_specific": {} 00:12:13.646 } 00:12:13.646 ] 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:13.646 
16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.646 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.647 "name": "Existed_Raid", 00:12:13.647 "uuid": "4ecfba34-7aa7-4f3f-9f2b-06a0a9089254", 00:12:13.647 "strip_size_kb": 64, 00:12:13.647 "state": "online", 00:12:13.647 "raid_level": "raid0", 00:12:13.647 "superblock": false, 00:12:13.647 "num_base_bdevs": 4, 00:12:13.647 "num_base_bdevs_discovered": 4, 00:12:13.647 
"num_base_bdevs_operational": 4, 00:12:13.647 "base_bdevs_list": [ 00:12:13.647 { 00:12:13.647 "name": "NewBaseBdev", 00:12:13.647 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:13.647 "is_configured": true, 00:12:13.647 "data_offset": 0, 00:12:13.647 "data_size": 65536 00:12:13.647 }, 00:12:13.647 { 00:12:13.647 "name": "BaseBdev2", 00:12:13.647 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:13.647 "is_configured": true, 00:12:13.647 "data_offset": 0, 00:12:13.647 "data_size": 65536 00:12:13.647 }, 00:12:13.647 { 00:12:13.647 "name": "BaseBdev3", 00:12:13.647 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:13.647 "is_configured": true, 00:12:13.647 "data_offset": 0, 00:12:13.647 "data_size": 65536 00:12:13.647 }, 00:12:13.647 { 00:12:13.647 "name": "BaseBdev4", 00:12:13.647 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:13.647 "is_configured": true, 00:12:13.647 "data_offset": 0, 00:12:13.647 "data_size": 65536 00:12:13.647 } 00:12:13.647 ] 00:12:13.647 }' 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.647 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.264 [2024-10-08 16:20:07.280520] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.264 "name": "Existed_Raid", 00:12:14.264 "aliases": [ 00:12:14.264 "4ecfba34-7aa7-4f3f-9f2b-06a0a9089254" 00:12:14.264 ], 00:12:14.264 "product_name": "Raid Volume", 00:12:14.264 "block_size": 512, 00:12:14.264 "num_blocks": 262144, 00:12:14.264 "uuid": "4ecfba34-7aa7-4f3f-9f2b-06a0a9089254", 00:12:14.264 "assigned_rate_limits": { 00:12:14.264 "rw_ios_per_sec": 0, 00:12:14.264 "rw_mbytes_per_sec": 0, 00:12:14.264 "r_mbytes_per_sec": 0, 00:12:14.264 "w_mbytes_per_sec": 0 00:12:14.264 }, 00:12:14.264 "claimed": false, 00:12:14.264 "zoned": false, 00:12:14.264 "supported_io_types": { 00:12:14.264 "read": true, 00:12:14.264 "write": true, 00:12:14.264 "unmap": true, 00:12:14.264 "flush": true, 00:12:14.264 "reset": true, 00:12:14.264 "nvme_admin": false, 00:12:14.264 "nvme_io": false, 00:12:14.264 "nvme_io_md": false, 00:12:14.264 "write_zeroes": true, 00:12:14.264 "zcopy": false, 00:12:14.264 "get_zone_info": false, 00:12:14.264 "zone_management": false, 00:12:14.264 "zone_append": false, 00:12:14.264 "compare": false, 00:12:14.264 "compare_and_write": false, 00:12:14.264 "abort": false, 00:12:14.264 "seek_hole": false, 00:12:14.264 "seek_data": false, 00:12:14.264 "copy": false, 00:12:14.264 "nvme_iov_md": false 00:12:14.264 }, 00:12:14.264 "memory_domains": [ 00:12:14.264 { 00:12:14.264 "dma_device_id": "system", 
00:12:14.264 "dma_device_type": 1 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.264 "dma_device_type": 2 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "system", 00:12:14.264 "dma_device_type": 1 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.264 "dma_device_type": 2 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "system", 00:12:14.264 "dma_device_type": 1 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.264 "dma_device_type": 2 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "system", 00:12:14.264 "dma_device_type": 1 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.264 "dma_device_type": 2 00:12:14.264 } 00:12:14.264 ], 00:12:14.264 "driver_specific": { 00:12:14.264 "raid": { 00:12:14.264 "uuid": "4ecfba34-7aa7-4f3f-9f2b-06a0a9089254", 00:12:14.264 "strip_size_kb": 64, 00:12:14.264 "state": "online", 00:12:14.264 "raid_level": "raid0", 00:12:14.264 "superblock": false, 00:12:14.264 "num_base_bdevs": 4, 00:12:14.264 "num_base_bdevs_discovered": 4, 00:12:14.264 "num_base_bdevs_operational": 4, 00:12:14.264 "base_bdevs_list": [ 00:12:14.264 { 00:12:14.264 "name": "NewBaseBdev", 00:12:14.264 "uuid": "e288cac1-c774-486d-bde2-bfd72e0ce7d2", 00:12:14.264 "is_configured": true, 00:12:14.264 "data_offset": 0, 00:12:14.264 "data_size": 65536 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "name": "BaseBdev2", 00:12:14.264 "uuid": "4f9214cc-e5c6-487f-8e3f-4a178e41bebb", 00:12:14.264 "is_configured": true, 00:12:14.264 "data_offset": 0, 00:12:14.264 "data_size": 65536 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "name": "BaseBdev3", 00:12:14.264 "uuid": "9b13aa4e-3c2a-4df7-a0be-3639d7052b70", 00:12:14.264 "is_configured": true, 00:12:14.264 "data_offset": 0, 00:12:14.264 "data_size": 65536 00:12:14.264 }, 00:12:14.264 { 00:12:14.264 "name": "BaseBdev4", 
00:12:14.264 "uuid": "a90b2c9d-ec8b-42bc-b6dd-c6377f888b33", 00:12:14.264 "is_configured": true, 00:12:14.264 "data_offset": 0, 00:12:14.264 "data_size": 65536 00:12:14.264 } 00:12:14.264 ] 00:12:14.264 } 00:12:14.264 } 00:12:14.264 }' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:14.264 BaseBdev2 00:12:14.264 BaseBdev3 00:12:14.264 BaseBdev4' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.264 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:14.522 16:20:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.522 [2024-10-08 16:20:07.676166] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.522 [2024-10-08 16:20:07.676226] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.522 [2024-10-08 16:20:07.676353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.522 [2024-10-08 16:20:07.676440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.522 [2024-10-08 16:20:07.676457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69787 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 69787 ']' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69787 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69787 00:12:14.522 killing process with pid 69787 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69787' 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69787 00:12:14.522 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69787 00:12:14.522 [2024-10-08 16:20:07.715022] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.779 [2024-10-08 16:20:08.075775] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:16.149 00:12:16.149 real 0m12.758s 00:12:16.149 user 0m20.917s 00:12:16.149 sys 0m1.833s 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.149 ************************************ 00:12:16.149 END TEST raid_state_function_test 00:12:16.149 ************************************ 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.149 16:20:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:16.149 16:20:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:16.149 16:20:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.149 16:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.149 ************************************ 00:12:16.149 START TEST raid_state_function_test_sb 00:12:16.149 ************************************ 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:16.149 16:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70471 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70471' 00:12:16.149 Process raid pid: 70471 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70471 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70471 ']' 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.149 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.149 [2024-10-08 16:20:09.459510] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:16.149 [2024-10-08 16:20:09.460039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.407 [2024-10-08 16:20:09.637669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.666 [2024-10-08 16:20:09.879612] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.923 [2024-10-08 16:20:10.144026] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.923 [2024-10-08 16:20:10.144538] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 [2024-10-08 16:20:10.440177] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.181 [2024-10-08 16:20:10.440249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.181 [2024-10-08 16:20:10.440277] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.181 [2024-10-08 16:20:10.440297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.181 [2024-10-08 16:20:10.440317] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:17.181 [2024-10-08 16:20:10.440333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.181 [2024-10-08 16:20:10.440347] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.181 [2024-10-08 16:20:10.440362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.181 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.182 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.182 "name": "Existed_Raid", 00:12:17.182 "uuid": "f04961e6-7c97-4708-a55a-d726e2a04066", 00:12:17.182 "strip_size_kb": 64, 00:12:17.182 "state": "configuring", 00:12:17.182 "raid_level": "raid0", 00:12:17.182 "superblock": true, 00:12:17.182 "num_base_bdevs": 4, 00:12:17.182 "num_base_bdevs_discovered": 0, 00:12:17.182 "num_base_bdevs_operational": 4, 00:12:17.182 "base_bdevs_list": [ 00:12:17.182 { 00:12:17.182 "name": "BaseBdev1", 00:12:17.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.182 "is_configured": false, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 0 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "name": "BaseBdev2", 00:12:17.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.182 "is_configured": false, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 0 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "name": "BaseBdev3", 00:12:17.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.182 "is_configured": false, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 0 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "name": "BaseBdev4", 00:12:17.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.182 "is_configured": false, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 0 00:12:17.182 } 00:12:17.182 ] 00:12:17.182 }' 00:12:17.182 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.182 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 [2024-10-08 16:20:10.972882] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.749 [2024-10-08 16:20:10.972946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 [2024-10-08 16:20:10.980913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.749 [2024-10-08 16:20:10.980975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.749 [2024-10-08 16:20:10.980992] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.749 [2024-10-08 16:20:10.981009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.749 [2024-10-08 16:20:10.981019] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.749 [2024-10-08 16:20:10.981034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.749 [2024-10-08 16:20:10.981044] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:17.749 [2024-10-08 16:20:10.981058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 [2024-10-08 16:20:11.057925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.749 BaseBdev1 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.066 [ 00:12:18.066 { 00:12:18.066 "name": "BaseBdev1", 00:12:18.066 "aliases": [ 00:12:18.066 "0986d64f-c803-4ea0-9811-0a666b5da80c" 00:12:18.066 ], 00:12:18.066 "product_name": "Malloc disk", 00:12:18.066 "block_size": 512, 00:12:18.066 "num_blocks": 65536, 00:12:18.066 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:18.066 "assigned_rate_limits": { 00:12:18.066 "rw_ios_per_sec": 0, 00:12:18.066 "rw_mbytes_per_sec": 0, 00:12:18.066 "r_mbytes_per_sec": 0, 00:12:18.066 "w_mbytes_per_sec": 0 00:12:18.066 }, 00:12:18.066 "claimed": true, 00:12:18.066 "claim_type": "exclusive_write", 00:12:18.066 "zoned": false, 00:12:18.066 "supported_io_types": { 00:12:18.066 "read": true, 00:12:18.066 "write": true, 00:12:18.066 "unmap": true, 00:12:18.066 "flush": true, 00:12:18.066 "reset": true, 00:12:18.066 "nvme_admin": false, 00:12:18.066 "nvme_io": false, 00:12:18.066 "nvme_io_md": false, 00:12:18.066 "write_zeroes": true, 00:12:18.066 "zcopy": true, 00:12:18.066 "get_zone_info": false, 00:12:18.066 "zone_management": false, 00:12:18.066 "zone_append": false, 00:12:18.066 "compare": false, 00:12:18.066 "compare_and_write": false, 00:12:18.066 "abort": true, 00:12:18.066 "seek_hole": false, 00:12:18.066 "seek_data": false, 00:12:18.066 "copy": true, 00:12:18.066 "nvme_iov_md": false 00:12:18.066 }, 00:12:18.066 "memory_domains": [ 00:12:18.066 { 00:12:18.066 "dma_device_id": "system", 00:12:18.066 "dma_device_type": 1 00:12:18.066 }, 00:12:18.066 { 00:12:18.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.066 "dma_device_type": 2 00:12:18.066 } 
00:12:18.066 ], 00:12:18.066 "driver_specific": {} 00:12:18.066 } 00:12:18.066 ] 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.066 16:20:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.066 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.066 "name": "Existed_Raid", 00:12:18.066 "uuid": "4e1983c5-cf3c-4569-a00e-970b194d23e9", 00:12:18.066 "strip_size_kb": 64, 00:12:18.067 "state": "configuring", 00:12:18.067 "raid_level": "raid0", 00:12:18.067 "superblock": true, 00:12:18.067 "num_base_bdevs": 4, 00:12:18.067 "num_base_bdevs_discovered": 1, 00:12:18.067 "num_base_bdevs_operational": 4, 00:12:18.067 "base_bdevs_list": [ 00:12:18.067 { 00:12:18.067 "name": "BaseBdev1", 00:12:18.067 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:18.067 "is_configured": true, 00:12:18.067 "data_offset": 2048, 00:12:18.067 "data_size": 63488 00:12:18.067 }, 00:12:18.067 { 00:12:18.067 "name": "BaseBdev2", 00:12:18.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.067 "is_configured": false, 00:12:18.067 "data_offset": 0, 00:12:18.067 "data_size": 0 00:12:18.067 }, 00:12:18.067 { 00:12:18.067 "name": "BaseBdev3", 00:12:18.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.067 "is_configured": false, 00:12:18.067 "data_offset": 0, 00:12:18.067 "data_size": 0 00:12:18.067 }, 00:12:18.067 { 00:12:18.067 "name": "BaseBdev4", 00:12:18.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.067 "is_configured": false, 00:12:18.067 "data_offset": 0, 00:12:18.067 "data_size": 0 00:12:18.067 } 00:12:18.067 ] 00:12:18.067 }' 00:12:18.067 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.067 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.326 16:20:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.326 [2024-10-08 16:20:11.642184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.326 [2024-10-08 16:20:11.642290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.326 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.584 [2024-10-08 16:20:11.650214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.584 [2024-10-08 16:20:11.653136] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.584 [2024-10-08 16:20:11.653199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.584 [2024-10-08 16:20:11.653217] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.584 [2024-10-08 16:20:11.653236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.584 [2024-10-08 16:20:11.653246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:18.584 [2024-10-08 16:20:11.653260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:18.584 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.584 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:18.585 "name": "Existed_Raid", 00:12:18.585 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:18.585 "strip_size_kb": 64, 00:12:18.585 "state": "configuring", 00:12:18.585 "raid_level": "raid0", 00:12:18.585 "superblock": true, 00:12:18.585 "num_base_bdevs": 4, 00:12:18.585 "num_base_bdevs_discovered": 1, 00:12:18.585 "num_base_bdevs_operational": 4, 00:12:18.585 "base_bdevs_list": [ 00:12:18.585 { 00:12:18.585 "name": "BaseBdev1", 00:12:18.585 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:18.585 "is_configured": true, 00:12:18.585 "data_offset": 2048, 00:12:18.585 "data_size": 63488 00:12:18.585 }, 00:12:18.585 { 00:12:18.585 "name": "BaseBdev2", 00:12:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.585 "is_configured": false, 00:12:18.585 "data_offset": 0, 00:12:18.585 "data_size": 0 00:12:18.585 }, 00:12:18.585 { 00:12:18.585 "name": "BaseBdev3", 00:12:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.585 "is_configured": false, 00:12:18.585 "data_offset": 0, 00:12:18.585 "data_size": 0 00:12:18.585 }, 00:12:18.585 { 00:12:18.585 "name": "BaseBdev4", 00:12:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.585 "is_configured": false, 00:12:18.585 "data_offset": 0, 00:12:18.585 "data_size": 0 00:12:18.585 } 00:12:18.585 ] 00:12:18.585 }' 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.585 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.153 [2024-10-08 16:20:12.226460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:19.153 BaseBdev2 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.153 [ 00:12:19.153 { 00:12:19.153 "name": "BaseBdev2", 00:12:19.153 "aliases": [ 00:12:19.153 "36ce4d97-d143-4c98-92b2-74d1f0c027a0" 00:12:19.153 ], 00:12:19.153 "product_name": "Malloc disk", 00:12:19.153 "block_size": 512, 00:12:19.153 "num_blocks": 65536, 00:12:19.153 "uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 
00:12:19.153 "assigned_rate_limits": { 00:12:19.153 "rw_ios_per_sec": 0, 00:12:19.153 "rw_mbytes_per_sec": 0, 00:12:19.153 "r_mbytes_per_sec": 0, 00:12:19.153 "w_mbytes_per_sec": 0 00:12:19.153 }, 00:12:19.153 "claimed": true, 00:12:19.153 "claim_type": "exclusive_write", 00:12:19.153 "zoned": false, 00:12:19.153 "supported_io_types": { 00:12:19.153 "read": true, 00:12:19.153 "write": true, 00:12:19.153 "unmap": true, 00:12:19.153 "flush": true, 00:12:19.153 "reset": true, 00:12:19.153 "nvme_admin": false, 00:12:19.153 "nvme_io": false, 00:12:19.153 "nvme_io_md": false, 00:12:19.153 "write_zeroes": true, 00:12:19.153 "zcopy": true, 00:12:19.153 "get_zone_info": false, 00:12:19.153 "zone_management": false, 00:12:19.153 "zone_append": false, 00:12:19.153 "compare": false, 00:12:19.153 "compare_and_write": false, 00:12:19.153 "abort": true, 00:12:19.153 "seek_hole": false, 00:12:19.153 "seek_data": false, 00:12:19.153 "copy": true, 00:12:19.153 "nvme_iov_md": false 00:12:19.153 }, 00:12:19.153 "memory_domains": [ 00:12:19.153 { 00:12:19.153 "dma_device_id": "system", 00:12:19.153 "dma_device_type": 1 00:12:19.153 }, 00:12:19.153 { 00:12:19.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.153 "dma_device_type": 2 00:12:19.153 } 00:12:19.153 ], 00:12:19.153 "driver_specific": {} 00:12:19.153 } 00:12:19.153 ] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.153 "name": "Existed_Raid", 00:12:19.153 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:19.153 "strip_size_kb": 64, 00:12:19.153 "state": "configuring", 00:12:19.153 "raid_level": "raid0", 00:12:19.153 "superblock": true, 00:12:19.153 "num_base_bdevs": 4, 00:12:19.153 "num_base_bdevs_discovered": 2, 00:12:19.153 
"num_base_bdevs_operational": 4, 00:12:19.153 "base_bdevs_list": [ 00:12:19.153 { 00:12:19.153 "name": "BaseBdev1", 00:12:19.153 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:19.153 "is_configured": true, 00:12:19.153 "data_offset": 2048, 00:12:19.153 "data_size": 63488 00:12:19.153 }, 00:12:19.153 { 00:12:19.153 "name": "BaseBdev2", 00:12:19.153 "uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 00:12:19.153 "is_configured": true, 00:12:19.153 "data_offset": 2048, 00:12:19.153 "data_size": 63488 00:12:19.153 }, 00:12:19.153 { 00:12:19.153 "name": "BaseBdev3", 00:12:19.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.153 "is_configured": false, 00:12:19.153 "data_offset": 0, 00:12:19.153 "data_size": 0 00:12:19.153 }, 00:12:19.153 { 00:12:19.153 "name": "BaseBdev4", 00:12:19.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.153 "is_configured": false, 00:12:19.153 "data_offset": 0, 00:12:19.153 "data_size": 0 00:12:19.153 } 00:12:19.153 ] 00:12:19.153 }' 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.153 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.721 [2024-10-08 16:20:12.854826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.721 BaseBdev3 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.721 [ 00:12:19.721 { 00:12:19.721 "name": "BaseBdev3", 00:12:19.721 "aliases": [ 00:12:19.721 "8e85f6a7-2a9e-48b8-a1d3-984c939678bd" 00:12:19.721 ], 00:12:19.721 "product_name": "Malloc disk", 00:12:19.721 "block_size": 512, 00:12:19.721 "num_blocks": 65536, 00:12:19.721 "uuid": "8e85f6a7-2a9e-48b8-a1d3-984c939678bd", 00:12:19.721 "assigned_rate_limits": { 00:12:19.721 "rw_ios_per_sec": 0, 00:12:19.721 "rw_mbytes_per_sec": 0, 00:12:19.721 "r_mbytes_per_sec": 0, 00:12:19.721 "w_mbytes_per_sec": 0 00:12:19.721 }, 00:12:19.721 "claimed": true, 00:12:19.721 "claim_type": "exclusive_write", 00:12:19.721 "zoned": false, 00:12:19.721 "supported_io_types": { 
00:12:19.721 "read": true, 00:12:19.721 "write": true, 00:12:19.721 "unmap": true, 00:12:19.721 "flush": true, 00:12:19.721 "reset": true, 00:12:19.721 "nvme_admin": false, 00:12:19.721 "nvme_io": false, 00:12:19.721 "nvme_io_md": false, 00:12:19.721 "write_zeroes": true, 00:12:19.721 "zcopy": true, 00:12:19.721 "get_zone_info": false, 00:12:19.721 "zone_management": false, 00:12:19.721 "zone_append": false, 00:12:19.721 "compare": false, 00:12:19.721 "compare_and_write": false, 00:12:19.721 "abort": true, 00:12:19.721 "seek_hole": false, 00:12:19.721 "seek_data": false, 00:12:19.721 "copy": true, 00:12:19.721 "nvme_iov_md": false 00:12:19.721 }, 00:12:19.721 "memory_domains": [ 00:12:19.721 { 00:12:19.721 "dma_device_id": "system", 00:12:19.721 "dma_device_type": 1 00:12:19.721 }, 00:12:19.721 { 00:12:19.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.721 "dma_device_type": 2 00:12:19.721 } 00:12:19.721 ], 00:12:19.721 "driver_specific": {} 00:12:19.721 } 00:12:19.721 ] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.721 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.721 "name": "Existed_Raid", 00:12:19.721 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:19.721 "strip_size_kb": 64, 00:12:19.721 "state": "configuring", 00:12:19.721 "raid_level": "raid0", 00:12:19.721 "superblock": true, 00:12:19.721 "num_base_bdevs": 4, 00:12:19.721 "num_base_bdevs_discovered": 3, 00:12:19.721 "num_base_bdevs_operational": 4, 00:12:19.721 "base_bdevs_list": [ 00:12:19.721 { 00:12:19.721 "name": "BaseBdev1", 00:12:19.721 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:19.721 "is_configured": true, 00:12:19.721 "data_offset": 2048, 00:12:19.721 "data_size": 63488 00:12:19.722 }, 00:12:19.722 { 00:12:19.722 "name": "BaseBdev2", 00:12:19.722 
"uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 00:12:19.722 "is_configured": true, 00:12:19.722 "data_offset": 2048, 00:12:19.722 "data_size": 63488 00:12:19.722 }, 00:12:19.722 { 00:12:19.722 "name": "BaseBdev3", 00:12:19.722 "uuid": "8e85f6a7-2a9e-48b8-a1d3-984c939678bd", 00:12:19.722 "is_configured": true, 00:12:19.722 "data_offset": 2048, 00:12:19.722 "data_size": 63488 00:12:19.722 }, 00:12:19.722 { 00:12:19.722 "name": "BaseBdev4", 00:12:19.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.722 "is_configured": false, 00:12:19.722 "data_offset": 0, 00:12:19.722 "data_size": 0 00:12:19.722 } 00:12:19.722 ] 00:12:19.722 }' 00:12:19.722 16:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.722 16:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 [2024-10-08 16:20:13.491026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.290 [2024-10-08 16:20:13.491503] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:20.290 [2024-10-08 16:20:13.491555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:20.290 BaseBdev4 00:12:20.290 [2024-10-08 16:20:13.491916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:20.290 [2024-10-08 16:20:13.492118] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:20.290 [2024-10-08 16:20:13.492148] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:20.290 [2024-10-08 16:20:13.492330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.290 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 [ 00:12:20.290 { 00:12:20.290 "name": "BaseBdev4", 00:12:20.290 "aliases": [ 00:12:20.290 "1b072aba-652f-4d50-84e0-0b78e7a79604" 00:12:20.290 ], 00:12:20.290 "product_name": "Malloc disk", 00:12:20.290 "block_size": 512, 00:12:20.290 
"num_blocks": 65536, 00:12:20.290 "uuid": "1b072aba-652f-4d50-84e0-0b78e7a79604", 00:12:20.290 "assigned_rate_limits": { 00:12:20.290 "rw_ios_per_sec": 0, 00:12:20.290 "rw_mbytes_per_sec": 0, 00:12:20.290 "r_mbytes_per_sec": 0, 00:12:20.290 "w_mbytes_per_sec": 0 00:12:20.290 }, 00:12:20.290 "claimed": true, 00:12:20.290 "claim_type": "exclusive_write", 00:12:20.291 "zoned": false, 00:12:20.291 "supported_io_types": { 00:12:20.291 "read": true, 00:12:20.291 "write": true, 00:12:20.291 "unmap": true, 00:12:20.291 "flush": true, 00:12:20.291 "reset": true, 00:12:20.291 "nvme_admin": false, 00:12:20.291 "nvme_io": false, 00:12:20.291 "nvme_io_md": false, 00:12:20.291 "write_zeroes": true, 00:12:20.291 "zcopy": true, 00:12:20.291 "get_zone_info": false, 00:12:20.291 "zone_management": false, 00:12:20.291 "zone_append": false, 00:12:20.291 "compare": false, 00:12:20.291 "compare_and_write": false, 00:12:20.291 "abort": true, 00:12:20.291 "seek_hole": false, 00:12:20.291 "seek_data": false, 00:12:20.291 "copy": true, 00:12:20.291 "nvme_iov_md": false 00:12:20.291 }, 00:12:20.291 "memory_domains": [ 00:12:20.291 { 00:12:20.291 "dma_device_id": "system", 00:12:20.291 "dma_device_type": 1 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.291 "dma_device_type": 2 00:12:20.291 } 00:12:20.291 ], 00:12:20.291 "driver_specific": {} 00:12:20.291 } 00:12:20.291 ] 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.291 "name": "Existed_Raid", 00:12:20.291 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:20.291 "strip_size_kb": 64, 00:12:20.291 "state": "online", 00:12:20.291 "raid_level": "raid0", 00:12:20.291 "superblock": true, 00:12:20.291 "num_base_bdevs": 4, 
00:12:20.291 "num_base_bdevs_discovered": 4, 00:12:20.291 "num_base_bdevs_operational": 4, 00:12:20.291 "base_bdevs_list": [ 00:12:20.291 { 00:12:20.291 "name": "BaseBdev1", 00:12:20.291 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:20.291 "is_configured": true, 00:12:20.291 "data_offset": 2048, 00:12:20.291 "data_size": 63488 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "name": "BaseBdev2", 00:12:20.291 "uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 00:12:20.291 "is_configured": true, 00:12:20.291 "data_offset": 2048, 00:12:20.291 "data_size": 63488 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "name": "BaseBdev3", 00:12:20.291 "uuid": "8e85f6a7-2a9e-48b8-a1d3-984c939678bd", 00:12:20.291 "is_configured": true, 00:12:20.291 "data_offset": 2048, 00:12:20.291 "data_size": 63488 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "name": "BaseBdev4", 00:12:20.291 "uuid": "1b072aba-652f-4d50-84e0-0b78e7a79604", 00:12:20.291 "is_configured": true, 00:12:20.291 "data_offset": 2048, 00:12:20.291 "data_size": 63488 00:12:20.291 } 00:12:20.291 ] 00:12:20.291 }' 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.291 16:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.858 
16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.858 [2024-10-08 16:20:14.059776] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.858 "name": "Existed_Raid", 00:12:20.858 "aliases": [ 00:12:20.858 "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36" 00:12:20.858 ], 00:12:20.858 "product_name": "Raid Volume", 00:12:20.858 "block_size": 512, 00:12:20.858 "num_blocks": 253952, 00:12:20.858 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:20.858 "assigned_rate_limits": { 00:12:20.858 "rw_ios_per_sec": 0, 00:12:20.858 "rw_mbytes_per_sec": 0, 00:12:20.858 "r_mbytes_per_sec": 0, 00:12:20.858 "w_mbytes_per_sec": 0 00:12:20.858 }, 00:12:20.858 "claimed": false, 00:12:20.858 "zoned": false, 00:12:20.858 "supported_io_types": { 00:12:20.858 "read": true, 00:12:20.858 "write": true, 00:12:20.858 "unmap": true, 00:12:20.858 "flush": true, 00:12:20.858 "reset": true, 00:12:20.858 "nvme_admin": false, 00:12:20.858 "nvme_io": false, 00:12:20.858 "nvme_io_md": false, 00:12:20.858 "write_zeroes": true, 00:12:20.858 "zcopy": false, 00:12:20.858 "get_zone_info": false, 00:12:20.858 "zone_management": false, 00:12:20.858 "zone_append": false, 00:12:20.858 "compare": false, 00:12:20.858 "compare_and_write": false, 00:12:20.858 "abort": false, 00:12:20.858 "seek_hole": false, 00:12:20.858 "seek_data": false, 00:12:20.858 "copy": false, 00:12:20.858 
"nvme_iov_md": false 00:12:20.858 }, 00:12:20.858 "memory_domains": [ 00:12:20.858 { 00:12:20.858 "dma_device_id": "system", 00:12:20.858 "dma_device_type": 1 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.858 "dma_device_type": 2 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "system", 00:12:20.858 "dma_device_type": 1 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.858 "dma_device_type": 2 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "system", 00:12:20.858 "dma_device_type": 1 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.858 "dma_device_type": 2 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "system", 00:12:20.858 "dma_device_type": 1 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.858 "dma_device_type": 2 00:12:20.858 } 00:12:20.858 ], 00:12:20.858 "driver_specific": { 00:12:20.858 "raid": { 00:12:20.858 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:20.858 "strip_size_kb": 64, 00:12:20.858 "state": "online", 00:12:20.858 "raid_level": "raid0", 00:12:20.858 "superblock": true, 00:12:20.858 "num_base_bdevs": 4, 00:12:20.858 "num_base_bdevs_discovered": 4, 00:12:20.858 "num_base_bdevs_operational": 4, 00:12:20.858 "base_bdevs_list": [ 00:12:20.858 { 00:12:20.858 "name": "BaseBdev1", 00:12:20.858 "uuid": "0986d64f-c803-4ea0-9811-0a666b5da80c", 00:12:20.858 "is_configured": true, 00:12:20.858 "data_offset": 2048, 00:12:20.858 "data_size": 63488 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "name": "BaseBdev2", 00:12:20.858 "uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 00:12:20.858 "is_configured": true, 00:12:20.858 "data_offset": 2048, 00:12:20.858 "data_size": 63488 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "name": "BaseBdev3", 00:12:20.858 "uuid": "8e85f6a7-2a9e-48b8-a1d3-984c939678bd", 00:12:20.858 "is_configured": true, 
00:12:20.858 "data_offset": 2048, 00:12:20.858 "data_size": 63488 00:12:20.858 }, 00:12:20.858 { 00:12:20.858 "name": "BaseBdev4", 00:12:20.858 "uuid": "1b072aba-652f-4d50-84e0-0b78e7a79604", 00:12:20.858 "is_configured": true, 00:12:20.858 "data_offset": 2048, 00:12:20.858 "data_size": 63488 00:12:20.858 } 00:12:20.858 ] 00:12:20.858 } 00:12:20.858 } 00:12:20.858 }' 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:20.858 BaseBdev2 00:12:20.858 BaseBdev3 00:12:20.858 BaseBdev4' 00:12:20.858 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.117 16:20:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.117 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 [2024-10-08 16:20:14.427482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.117 [2024-10-08 16:20:14.427577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.117 [2024-10-08 16:20:14.427671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.376 "name": "Existed_Raid", 00:12:21.376 "uuid": "7e1a33a9-ed0d-4efc-b39c-320cf5a05d36", 00:12:21.376 "strip_size_kb": 64, 00:12:21.376 "state": "offline", 00:12:21.376 "raid_level": "raid0", 00:12:21.376 "superblock": true, 00:12:21.376 "num_base_bdevs": 4, 00:12:21.376 "num_base_bdevs_discovered": 3, 00:12:21.376 "num_base_bdevs_operational": 3, 00:12:21.376 "base_bdevs_list": [ 00:12:21.376 { 00:12:21.376 "name": null, 00:12:21.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.376 "is_configured": false, 00:12:21.376 "data_offset": 0, 00:12:21.376 "data_size": 63488 00:12:21.376 }, 00:12:21.376 { 00:12:21.376 "name": "BaseBdev2", 00:12:21.376 "uuid": "36ce4d97-d143-4c98-92b2-74d1f0c027a0", 00:12:21.376 "is_configured": true, 00:12:21.376 "data_offset": 2048, 00:12:21.376 "data_size": 63488 00:12:21.376 }, 00:12:21.376 { 00:12:21.376 "name": "BaseBdev3", 00:12:21.376 "uuid": "8e85f6a7-2a9e-48b8-a1d3-984c939678bd", 00:12:21.376 "is_configured": true, 00:12:21.376 "data_offset": 2048, 00:12:21.376 "data_size": 63488 00:12:21.376 }, 00:12:21.376 { 00:12:21.376 "name": "BaseBdev4", 00:12:21.376 "uuid": "1b072aba-652f-4d50-84e0-0b78e7a79604", 00:12:21.376 "is_configured": true, 00:12:21.376 "data_offset": 2048, 00:12:21.376 "data_size": 63488 00:12:21.376 } 00:12:21.376 ] 00:12:21.376 }' 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.376 16:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.942 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:21.942 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.942 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.942 16:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.943 [2024-10-08 16:20:15.115202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.943 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.201 [2024-10-08 16:20:15.270464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:22.201 16:20:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.201 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.201 [2024-10-08 16:20:15.426602] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:22.201 [2024-10-08 16:20:15.426681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.474 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 BaseBdev2 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 [ 00:12:22.475 { 00:12:22.475 "name": "BaseBdev2", 00:12:22.475 "aliases": [ 00:12:22.475 
"a6e76694-2e63-4127-a74a-4ac3bebdfe1e" 00:12:22.475 ], 00:12:22.475 "product_name": "Malloc disk", 00:12:22.475 "block_size": 512, 00:12:22.475 "num_blocks": 65536, 00:12:22.475 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:22.475 "assigned_rate_limits": { 00:12:22.475 "rw_ios_per_sec": 0, 00:12:22.475 "rw_mbytes_per_sec": 0, 00:12:22.475 "r_mbytes_per_sec": 0, 00:12:22.475 "w_mbytes_per_sec": 0 00:12:22.475 }, 00:12:22.475 "claimed": false, 00:12:22.475 "zoned": false, 00:12:22.475 "supported_io_types": { 00:12:22.475 "read": true, 00:12:22.475 "write": true, 00:12:22.475 "unmap": true, 00:12:22.475 "flush": true, 00:12:22.475 "reset": true, 00:12:22.475 "nvme_admin": false, 00:12:22.475 "nvme_io": false, 00:12:22.475 "nvme_io_md": false, 00:12:22.475 "write_zeroes": true, 00:12:22.475 "zcopy": true, 00:12:22.475 "get_zone_info": false, 00:12:22.475 "zone_management": false, 00:12:22.475 "zone_append": false, 00:12:22.475 "compare": false, 00:12:22.475 "compare_and_write": false, 00:12:22.475 "abort": true, 00:12:22.475 "seek_hole": false, 00:12:22.475 "seek_data": false, 00:12:22.475 "copy": true, 00:12:22.475 "nvme_iov_md": false 00:12:22.475 }, 00:12:22.475 "memory_domains": [ 00:12:22.475 { 00:12:22.475 "dma_device_id": "system", 00:12:22.475 "dma_device_type": 1 00:12:22.475 }, 00:12:22.475 { 00:12:22.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.475 "dma_device_type": 2 00:12:22.475 } 00:12:22.475 ], 00:12:22.475 "driver_specific": {} 00:12:22.475 } 00:12:22.475 ] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.475 16:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 BaseBdev3 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 [ 00:12:22.475 { 
00:12:22.475 "name": "BaseBdev3", 00:12:22.475 "aliases": [ 00:12:22.475 "28b88576-7c79-4eaf-b56a-ecb55a556748" 00:12:22.475 ], 00:12:22.475 "product_name": "Malloc disk", 00:12:22.475 "block_size": 512, 00:12:22.475 "num_blocks": 65536, 00:12:22.475 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:22.475 "assigned_rate_limits": { 00:12:22.475 "rw_ios_per_sec": 0, 00:12:22.475 "rw_mbytes_per_sec": 0, 00:12:22.475 "r_mbytes_per_sec": 0, 00:12:22.475 "w_mbytes_per_sec": 0 00:12:22.475 }, 00:12:22.475 "claimed": false, 00:12:22.475 "zoned": false, 00:12:22.475 "supported_io_types": { 00:12:22.475 "read": true, 00:12:22.475 "write": true, 00:12:22.475 "unmap": true, 00:12:22.475 "flush": true, 00:12:22.475 "reset": true, 00:12:22.475 "nvme_admin": false, 00:12:22.475 "nvme_io": false, 00:12:22.475 "nvme_io_md": false, 00:12:22.475 "write_zeroes": true, 00:12:22.475 "zcopy": true, 00:12:22.475 "get_zone_info": false, 00:12:22.475 "zone_management": false, 00:12:22.475 "zone_append": false, 00:12:22.475 "compare": false, 00:12:22.475 "compare_and_write": false, 00:12:22.475 "abort": true, 00:12:22.475 "seek_hole": false, 00:12:22.475 "seek_data": false, 00:12:22.475 "copy": true, 00:12:22.475 "nvme_iov_md": false 00:12:22.475 }, 00:12:22.475 "memory_domains": [ 00:12:22.475 { 00:12:22.475 "dma_device_id": "system", 00:12:22.475 "dma_device_type": 1 00:12:22.475 }, 00:12:22.475 { 00:12:22.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.475 "dma_device_type": 2 00:12:22.475 } 00:12:22.475 ], 00:12:22.475 "driver_specific": {} 00:12:22.475 } 00:12:22.475 ] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 BaseBdev4 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.475 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:22.733 [ 00:12:22.733 { 00:12:22.733 "name": "BaseBdev4", 00:12:22.733 "aliases": [ 00:12:22.733 "de129351-dd33-4c63-adb9-a953b3ea8ea7" 00:12:22.733 ], 00:12:22.733 "product_name": "Malloc disk", 00:12:22.733 "block_size": 512, 00:12:22.733 "num_blocks": 65536, 00:12:22.733 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:22.733 "assigned_rate_limits": { 00:12:22.733 "rw_ios_per_sec": 0, 00:12:22.733 "rw_mbytes_per_sec": 0, 00:12:22.733 "r_mbytes_per_sec": 0, 00:12:22.733 "w_mbytes_per_sec": 0 00:12:22.733 }, 00:12:22.733 "claimed": false, 00:12:22.733 "zoned": false, 00:12:22.733 "supported_io_types": { 00:12:22.733 "read": true, 00:12:22.733 "write": true, 00:12:22.733 "unmap": true, 00:12:22.733 "flush": true, 00:12:22.733 "reset": true, 00:12:22.733 "nvme_admin": false, 00:12:22.733 "nvme_io": false, 00:12:22.733 "nvme_io_md": false, 00:12:22.733 "write_zeroes": true, 00:12:22.733 "zcopy": true, 00:12:22.733 "get_zone_info": false, 00:12:22.733 "zone_management": false, 00:12:22.733 "zone_append": false, 00:12:22.733 "compare": false, 00:12:22.733 "compare_and_write": false, 00:12:22.733 "abort": true, 00:12:22.733 "seek_hole": false, 00:12:22.733 "seek_data": false, 00:12:22.733 "copy": true, 00:12:22.733 "nvme_iov_md": false 00:12:22.733 }, 00:12:22.733 "memory_domains": [ 00:12:22.733 { 00:12:22.733 "dma_device_id": "system", 00:12:22.733 "dma_device_type": 1 00:12:22.733 }, 00:12:22.733 { 00:12:22.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.733 "dma_device_type": 2 00:12:22.733 } 00:12:22.733 ], 00:12:22.733 "driver_specific": {} 00:12:22.733 } 00:12:22.733 ] 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.733 16:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 [2024-10-08 16:20:15.820769] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.733 [2024-10-08 16:20:15.820838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.733 [2024-10-08 16:20:15.820881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.733 [2024-10-08 16:20:15.823666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.733 [2024-10-08 16:20:15.823749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.733 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.734 "name": "Existed_Raid", 00:12:22.734 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:22.734 "strip_size_kb": 64, 00:12:22.734 "state": "configuring", 00:12:22.734 "raid_level": "raid0", 00:12:22.734 "superblock": true, 00:12:22.734 "num_base_bdevs": 4, 00:12:22.734 "num_base_bdevs_discovered": 3, 00:12:22.734 "num_base_bdevs_operational": 4, 00:12:22.734 "base_bdevs_list": [ 00:12:22.734 { 00:12:22.734 "name": "BaseBdev1", 00:12:22.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.734 "is_configured": false, 00:12:22.734 "data_offset": 0, 00:12:22.734 "data_size": 0 00:12:22.734 }, 00:12:22.734 { 00:12:22.734 "name": "BaseBdev2", 00:12:22.734 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:22.734 "is_configured": true, 00:12:22.734 "data_offset": 2048, 00:12:22.734 "data_size": 63488 
00:12:22.734 }, 00:12:22.734 { 00:12:22.734 "name": "BaseBdev3", 00:12:22.734 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:22.734 "is_configured": true, 00:12:22.734 "data_offset": 2048, 00:12:22.734 "data_size": 63488 00:12:22.734 }, 00:12:22.734 { 00:12:22.734 "name": "BaseBdev4", 00:12:22.734 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:22.734 "is_configured": true, 00:12:22.734 "data_offset": 2048, 00:12:22.734 "data_size": 63488 00:12:22.734 } 00:12:22.734 ] 00:12:22.734 }' 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.734 16:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 [2024-10-08 16:20:16.360832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.299 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.299 "name": "Existed_Raid", 00:12:23.299 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:23.299 "strip_size_kb": 64, 00:12:23.299 "state": "configuring", 00:12:23.299 "raid_level": "raid0", 00:12:23.299 "superblock": true, 00:12:23.299 "num_base_bdevs": 4, 00:12:23.299 "num_base_bdevs_discovered": 2, 00:12:23.299 "num_base_bdevs_operational": 4, 00:12:23.299 "base_bdevs_list": [ 00:12:23.299 { 00:12:23.299 "name": "BaseBdev1", 00:12:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.299 "is_configured": false, 00:12:23.299 "data_offset": 0, 00:12:23.299 "data_size": 0 00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": null, 00:12:23.299 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:23.299 "is_configured": false, 00:12:23.299 "data_offset": 0, 00:12:23.299 "data_size": 63488 
00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": "BaseBdev3", 00:12:23.299 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:23.299 "is_configured": true, 00:12:23.299 "data_offset": 2048, 00:12:23.299 "data_size": 63488 00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": "BaseBdev4", 00:12:23.299 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:23.299 "is_configured": true, 00:12:23.300 "data_offset": 2048, 00:12:23.300 "data_size": 63488 00:12:23.300 } 00:12:23.300 ] 00:12:23.300 }' 00:12:23.300 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.300 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.864 16:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 [2024-10-08 16:20:17.026688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.864 BaseBdev1 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 [ 00:12:23.864 { 00:12:23.864 "name": "BaseBdev1", 00:12:23.864 "aliases": [ 00:12:23.864 "88a877f0-99a3-4e59-849a-f4852d5a44c6" 00:12:23.864 ], 00:12:23.864 "product_name": "Malloc disk", 00:12:23.864 "block_size": 512, 00:12:23.864 "num_blocks": 65536, 00:12:23.864 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:23.864 "assigned_rate_limits": { 00:12:23.864 "rw_ios_per_sec": 0, 00:12:23.864 "rw_mbytes_per_sec": 0, 
00:12:23.864 "r_mbytes_per_sec": 0, 00:12:23.864 "w_mbytes_per_sec": 0 00:12:23.864 }, 00:12:23.864 "claimed": true, 00:12:23.864 "claim_type": "exclusive_write", 00:12:23.864 "zoned": false, 00:12:23.864 "supported_io_types": { 00:12:23.864 "read": true, 00:12:23.864 "write": true, 00:12:23.864 "unmap": true, 00:12:23.864 "flush": true, 00:12:23.864 "reset": true, 00:12:23.864 "nvme_admin": false, 00:12:23.864 "nvme_io": false, 00:12:23.864 "nvme_io_md": false, 00:12:23.864 "write_zeroes": true, 00:12:23.864 "zcopy": true, 00:12:23.864 "get_zone_info": false, 00:12:23.864 "zone_management": false, 00:12:23.864 "zone_append": false, 00:12:23.864 "compare": false, 00:12:23.864 "compare_and_write": false, 00:12:23.864 "abort": true, 00:12:23.864 "seek_hole": false, 00:12:23.864 "seek_data": false, 00:12:23.864 "copy": true, 00:12:23.864 "nvme_iov_md": false 00:12:23.864 }, 00:12:23.864 "memory_domains": [ 00:12:23.864 { 00:12:23.864 "dma_device_id": "system", 00:12:23.864 "dma_device_type": 1 00:12:23.864 }, 00:12:23.864 { 00:12:23.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.864 "dma_device_type": 2 00:12:23.864 } 00:12:23.864 ], 00:12:23.864 "driver_specific": {} 00:12:23.864 } 00:12:23.864 ] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.864 16:20:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.864 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.864 "name": "Existed_Raid", 00:12:23.864 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:23.865 "strip_size_kb": 64, 00:12:23.865 "state": "configuring", 00:12:23.865 "raid_level": "raid0", 00:12:23.865 "superblock": true, 00:12:23.865 "num_base_bdevs": 4, 00:12:23.865 "num_base_bdevs_discovered": 3, 00:12:23.865 "num_base_bdevs_operational": 4, 00:12:23.865 "base_bdevs_list": [ 00:12:23.865 { 00:12:23.865 "name": "BaseBdev1", 00:12:23.865 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:23.865 "is_configured": true, 00:12:23.865 "data_offset": 2048, 00:12:23.865 "data_size": 63488 00:12:23.865 }, 00:12:23.865 { 
00:12:23.865 "name": null, 00:12:23.865 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:23.865 "is_configured": false, 00:12:23.865 "data_offset": 0, 00:12:23.865 "data_size": 63488 00:12:23.865 }, 00:12:23.865 { 00:12:23.865 "name": "BaseBdev3", 00:12:23.865 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:23.865 "is_configured": true, 00:12:23.865 "data_offset": 2048, 00:12:23.865 "data_size": 63488 00:12:23.865 }, 00:12:23.865 { 00:12:23.865 "name": "BaseBdev4", 00:12:23.865 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:23.865 "is_configured": true, 00:12:23.865 "data_offset": 2048, 00:12:23.865 "data_size": 63488 00:12:23.865 } 00:12:23.865 ] 00:12:23.865 }' 00:12:23.865 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.865 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.433 [2024-10-08 16:20:17.666986] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.433 16:20:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.433 "name": "Existed_Raid", 00:12:24.433 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:24.433 "strip_size_kb": 64, 00:12:24.433 "state": "configuring", 00:12:24.433 "raid_level": "raid0", 00:12:24.433 "superblock": true, 00:12:24.433 "num_base_bdevs": 4, 00:12:24.433 "num_base_bdevs_discovered": 2, 00:12:24.433 "num_base_bdevs_operational": 4, 00:12:24.433 "base_bdevs_list": [ 00:12:24.433 { 00:12:24.433 "name": "BaseBdev1", 00:12:24.433 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:24.433 "is_configured": true, 00:12:24.433 "data_offset": 2048, 00:12:24.433 "data_size": 63488 00:12:24.433 }, 00:12:24.433 { 00:12:24.433 "name": null, 00:12:24.433 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:24.433 "is_configured": false, 00:12:24.433 "data_offset": 0, 00:12:24.433 "data_size": 63488 00:12:24.433 }, 00:12:24.433 { 00:12:24.433 "name": null, 00:12:24.433 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:24.433 "is_configured": false, 00:12:24.433 "data_offset": 0, 00:12:24.433 "data_size": 63488 00:12:24.433 }, 00:12:24.433 { 00:12:24.433 "name": "BaseBdev4", 00:12:24.433 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:24.433 "is_configured": true, 00:12:24.433 "data_offset": 2048, 00:12:24.433 "data_size": 63488 00:12:24.433 } 00:12:24.433 ] 00:12:24.433 }' 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.433 16:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.005 16:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.005 [2024-10-08 16:20:18.267203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.005 "name": "Existed_Raid", 00:12:25.005 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:25.005 "strip_size_kb": 64, 00:12:25.005 "state": "configuring", 00:12:25.005 "raid_level": "raid0", 00:12:25.005 "superblock": true, 00:12:25.005 "num_base_bdevs": 4, 00:12:25.005 "num_base_bdevs_discovered": 3, 00:12:25.005 "num_base_bdevs_operational": 4, 00:12:25.005 "base_bdevs_list": [ 00:12:25.005 { 00:12:25.005 "name": "BaseBdev1", 00:12:25.005 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:25.005 "is_configured": true, 00:12:25.005 "data_offset": 2048, 00:12:25.005 "data_size": 63488 00:12:25.005 }, 00:12:25.005 { 00:12:25.005 "name": null, 00:12:25.005 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:25.005 "is_configured": false, 00:12:25.005 "data_offset": 0, 00:12:25.005 "data_size": 63488 00:12:25.005 }, 00:12:25.005 { 00:12:25.005 "name": "BaseBdev3", 00:12:25.005 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:25.005 "is_configured": true, 00:12:25.005 "data_offset": 2048, 00:12:25.005 "data_size": 63488 00:12:25.005 }, 00:12:25.005 { 00:12:25.005 "name": "BaseBdev4", 00:12:25.005 "uuid": 
"de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:25.005 "is_configured": true, 00:12:25.005 "data_offset": 2048, 00:12:25.005 "data_size": 63488 00:12:25.005 } 00:12:25.005 ] 00:12:25.005 }' 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.005 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.569 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.569 [2024-10-08 16:20:18.839352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.830 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.830 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.830 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.830 16:20:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.830 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.831 "name": "Existed_Raid", 00:12:25.831 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:25.831 "strip_size_kb": 64, 00:12:25.831 "state": "configuring", 00:12:25.831 "raid_level": "raid0", 00:12:25.831 "superblock": true, 00:12:25.831 "num_base_bdevs": 4, 00:12:25.831 "num_base_bdevs_discovered": 2, 00:12:25.831 "num_base_bdevs_operational": 4, 00:12:25.831 "base_bdevs_list": [ 00:12:25.831 { 00:12:25.831 "name": null, 00:12:25.831 
"uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:25.831 "is_configured": false, 00:12:25.831 "data_offset": 0, 00:12:25.831 "data_size": 63488 00:12:25.831 }, 00:12:25.831 { 00:12:25.831 "name": null, 00:12:25.831 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:25.831 "is_configured": false, 00:12:25.831 "data_offset": 0, 00:12:25.831 "data_size": 63488 00:12:25.831 }, 00:12:25.831 { 00:12:25.831 "name": "BaseBdev3", 00:12:25.831 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:25.831 "is_configured": true, 00:12:25.831 "data_offset": 2048, 00:12:25.831 "data_size": 63488 00:12:25.831 }, 00:12:25.831 { 00:12:25.831 "name": "BaseBdev4", 00:12:25.831 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:25.831 "is_configured": true, 00:12:25.831 "data_offset": 2048, 00:12:25.831 "data_size": 63488 00:12:25.831 } 00:12:25.831 ] 00:12:25.831 }' 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.831 16:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.396 [2024-10-08 16:20:19.487396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.396 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.397 16:20:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.397 "name": "Existed_Raid", 00:12:26.397 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:26.397 "strip_size_kb": 64, 00:12:26.397 "state": "configuring", 00:12:26.397 "raid_level": "raid0", 00:12:26.397 "superblock": true, 00:12:26.397 "num_base_bdevs": 4, 00:12:26.397 "num_base_bdevs_discovered": 3, 00:12:26.397 "num_base_bdevs_operational": 4, 00:12:26.397 "base_bdevs_list": [ 00:12:26.397 { 00:12:26.397 "name": null, 00:12:26.397 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:26.397 "is_configured": false, 00:12:26.397 "data_offset": 0, 00:12:26.397 "data_size": 63488 00:12:26.397 }, 00:12:26.397 { 00:12:26.397 "name": "BaseBdev2", 00:12:26.397 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:26.397 "is_configured": true, 00:12:26.397 "data_offset": 2048, 00:12:26.397 "data_size": 63488 00:12:26.397 }, 00:12:26.397 { 00:12:26.397 "name": "BaseBdev3", 00:12:26.397 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:26.397 "is_configured": true, 00:12:26.397 "data_offset": 2048, 00:12:26.397 "data_size": 63488 00:12:26.397 }, 00:12:26.397 { 00:12:26.397 "name": "BaseBdev4", 00:12:26.397 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:26.397 "is_configured": true, 00:12:26.397 "data_offset": 2048, 00:12:26.397 "data_size": 63488 00:12:26.397 } 00:12:26.397 ] 00:12:26.397 }' 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.397 16:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:26.964 16:20:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88a877f0-99a3-4e59-849a-f4852d5a44c6 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 [2024-10-08 16:20:20.182737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:26.964 [2024-10-08 16:20:20.183085] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.964 [2024-10-08 16:20:20.183104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:26.964 [2024-10-08 16:20:20.183437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:26.964 [2024-10-08 16:20:20.183653] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.964 [2024-10-08 16:20:20.183677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:26.964 NewBaseBdev 00:12:26.964 [2024-10-08 16:20:20.183839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 [ 00:12:26.964 { 00:12:26.964 "name": "NewBaseBdev", 00:12:26.964 "aliases": [ 00:12:26.964 "88a877f0-99a3-4e59-849a-f4852d5a44c6" 00:12:26.964 ], 00:12:26.964 "product_name": "Malloc disk", 00:12:26.964 "block_size": 512, 00:12:26.964 "num_blocks": 65536, 00:12:26.964 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:26.964 "assigned_rate_limits": { 00:12:26.964 "rw_ios_per_sec": 0, 00:12:26.964 "rw_mbytes_per_sec": 0, 00:12:26.964 "r_mbytes_per_sec": 0, 00:12:26.964 "w_mbytes_per_sec": 0 00:12:26.964 }, 00:12:26.964 "claimed": true, 00:12:26.964 "claim_type": "exclusive_write", 00:12:26.964 "zoned": false, 00:12:26.964 "supported_io_types": { 00:12:26.964 "read": true, 00:12:26.964 "write": true, 00:12:26.964 "unmap": true, 00:12:26.964 "flush": true, 00:12:26.964 "reset": true, 00:12:26.964 "nvme_admin": false, 00:12:26.964 "nvme_io": false, 00:12:26.964 "nvme_io_md": false, 00:12:26.964 "write_zeroes": true, 00:12:26.964 "zcopy": true, 00:12:26.964 "get_zone_info": false, 00:12:26.964 "zone_management": false, 00:12:26.964 "zone_append": false, 00:12:26.964 "compare": false, 00:12:26.964 "compare_and_write": false, 00:12:26.964 "abort": true, 00:12:26.964 "seek_hole": false, 00:12:26.964 "seek_data": false, 00:12:26.964 "copy": true, 00:12:26.964 "nvme_iov_md": false 00:12:26.964 }, 00:12:26.964 "memory_domains": [ 00:12:26.964 { 00:12:26.964 "dma_device_id": "system", 00:12:26.964 "dma_device_type": 1 00:12:26.964 }, 00:12:26.964 { 00:12:26.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.964 "dma_device_type": 2 00:12:26.964 } 00:12:26.964 ], 00:12:26.964 "driver_specific": {} 00:12:26.964 } 00:12:26.964 ] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:26.964 16:20:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.964 "name": "Existed_Raid", 00:12:26.964 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:26.964 "strip_size_kb": 64, 00:12:26.964 
"state": "online", 00:12:26.964 "raid_level": "raid0", 00:12:26.964 "superblock": true, 00:12:26.964 "num_base_bdevs": 4, 00:12:26.964 "num_base_bdevs_discovered": 4, 00:12:26.964 "num_base_bdevs_operational": 4, 00:12:26.964 "base_bdevs_list": [ 00:12:26.964 { 00:12:26.964 "name": "NewBaseBdev", 00:12:26.964 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:26.964 "is_configured": true, 00:12:26.964 "data_offset": 2048, 00:12:26.964 "data_size": 63488 00:12:26.964 }, 00:12:26.964 { 00:12:26.964 "name": "BaseBdev2", 00:12:26.964 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:26.964 "is_configured": true, 00:12:26.964 "data_offset": 2048, 00:12:26.964 "data_size": 63488 00:12:26.964 }, 00:12:26.964 { 00:12:26.964 "name": "BaseBdev3", 00:12:26.964 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:26.964 "is_configured": true, 00:12:26.964 "data_offset": 2048, 00:12:26.964 "data_size": 63488 00:12:26.964 }, 00:12:26.964 { 00:12:26.964 "name": "BaseBdev4", 00:12:26.964 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:26.964 "is_configured": true, 00:12:26.964 "data_offset": 2048, 00:12:26.964 "data_size": 63488 00:12:26.964 } 00:12:26.964 ] 00:12:26.964 }' 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.964 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.532 
16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.532 [2024-10-08 16:20:20.763469] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.532 "name": "Existed_Raid", 00:12:27.532 "aliases": [ 00:12:27.532 "d8dae68d-b211-4dd3-87b6-71a629ed1f07" 00:12:27.532 ], 00:12:27.532 "product_name": "Raid Volume", 00:12:27.532 "block_size": 512, 00:12:27.532 "num_blocks": 253952, 00:12:27.532 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:27.532 "assigned_rate_limits": { 00:12:27.532 "rw_ios_per_sec": 0, 00:12:27.532 "rw_mbytes_per_sec": 0, 00:12:27.532 "r_mbytes_per_sec": 0, 00:12:27.532 "w_mbytes_per_sec": 0 00:12:27.532 }, 00:12:27.532 "claimed": false, 00:12:27.532 "zoned": false, 00:12:27.532 "supported_io_types": { 00:12:27.532 "read": true, 00:12:27.532 "write": true, 00:12:27.532 "unmap": true, 00:12:27.532 "flush": true, 00:12:27.532 "reset": true, 00:12:27.532 "nvme_admin": false, 00:12:27.532 "nvme_io": false, 00:12:27.532 "nvme_io_md": false, 00:12:27.532 "write_zeroes": true, 00:12:27.532 "zcopy": false, 00:12:27.532 "get_zone_info": false, 00:12:27.532 "zone_management": false, 00:12:27.532 "zone_append": false, 00:12:27.532 "compare": false, 00:12:27.532 "compare_and_write": false, 00:12:27.532 "abort": 
false, 00:12:27.532 "seek_hole": false, 00:12:27.532 "seek_data": false, 00:12:27.532 "copy": false, 00:12:27.532 "nvme_iov_md": false 00:12:27.532 }, 00:12:27.532 "memory_domains": [ 00:12:27.532 { 00:12:27.532 "dma_device_id": "system", 00:12:27.532 "dma_device_type": 1 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.532 "dma_device_type": 2 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "system", 00:12:27.532 "dma_device_type": 1 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.532 "dma_device_type": 2 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "system", 00:12:27.532 "dma_device_type": 1 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.532 "dma_device_type": 2 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "system", 00:12:27.532 "dma_device_type": 1 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.532 "dma_device_type": 2 00:12:27.532 } 00:12:27.532 ], 00:12:27.532 "driver_specific": { 00:12:27.532 "raid": { 00:12:27.532 "uuid": "d8dae68d-b211-4dd3-87b6-71a629ed1f07", 00:12:27.532 "strip_size_kb": 64, 00:12:27.532 "state": "online", 00:12:27.532 "raid_level": "raid0", 00:12:27.532 "superblock": true, 00:12:27.532 "num_base_bdevs": 4, 00:12:27.532 "num_base_bdevs_discovered": 4, 00:12:27.532 "num_base_bdevs_operational": 4, 00:12:27.532 "base_bdevs_list": [ 00:12:27.532 { 00:12:27.532 "name": "NewBaseBdev", 00:12:27.532 "uuid": "88a877f0-99a3-4e59-849a-f4852d5a44c6", 00:12:27.532 "is_configured": true, 00:12:27.532 "data_offset": 2048, 00:12:27.532 "data_size": 63488 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "name": "BaseBdev2", 00:12:27.532 "uuid": "a6e76694-2e63-4127-a74a-4ac3bebdfe1e", 00:12:27.532 "is_configured": true, 00:12:27.532 "data_offset": 2048, 00:12:27.532 "data_size": 63488 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 
"name": "BaseBdev3", 00:12:27.532 "uuid": "28b88576-7c79-4eaf-b56a-ecb55a556748", 00:12:27.532 "is_configured": true, 00:12:27.532 "data_offset": 2048, 00:12:27.532 "data_size": 63488 00:12:27.532 }, 00:12:27.532 { 00:12:27.532 "name": "BaseBdev4", 00:12:27.532 "uuid": "de129351-dd33-4c63-adb9-a953b3ea8ea7", 00:12:27.532 "is_configured": true, 00:12:27.532 "data_offset": 2048, 00:12:27.532 "data_size": 63488 00:12:27.532 } 00:12:27.532 ] 00:12:27.532 } 00:12:27.532 } 00:12:27.532 }' 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.532 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:27.532 BaseBdev2 00:12:27.532 BaseBdev3 00:12:27.532 BaseBdev4' 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.792 16:20:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.792 16:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.792 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.051 [2024-10-08 16:20:21.123116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.051 [2024-10-08 16:20:21.123190] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.051 [2024-10-08 16:20:21.123322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.051 [2024-10-08 16:20:21.123427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.051 [2024-10-08 16:20:21.123446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70471 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70471 ']' 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70471 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70471 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.051 killing process with pid 70471 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70471' 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70471 00:12:28.051 [2024-10-08 16:20:21.161165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.051 16:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70471 00:12:28.310 [2024-10-08 16:20:21.556427] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.747 16:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:29.747 00:12:29.747 real 0m13.580s 00:12:29.747 user 0m22.103s 00:12:29.747 sys 0m2.014s 00:12:29.747 16:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.747 16:20:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.748 ************************************ 00:12:29.748 END TEST raid_state_function_test_sb 00:12:29.748 ************************************ 00:12:29.748 16:20:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:29.748 16:20:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:29.748 16:20:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.748 16:20:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.748 ************************************ 00:12:29.748 START TEST raid_superblock_test 00:12:29.748 ************************************ 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71164 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71164 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71164 ']' 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.748 16:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.006 [2024-10-08 16:20:23.074142] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:30.006 [2024-10-08 16:20:23.074299] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:12:30.006 [2024-10-08 16:20:23.244239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.265 [2024-10-08 16:20:23.560370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.524 [2024-10-08 16:20:23.769321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.524 [2024-10-08 16:20:23.769390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:31.092 
16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 malloc1 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 [2024-10-08 16:20:24.217153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.092 [2024-10-08 16:20:24.217298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.092 [2024-10-08 16:20:24.217340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:31.092 [2024-10-08 16:20:24.217363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.092 [2024-10-08 16:20:24.220251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.092 [2024-10-08 16:20:24.220296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.092 pt1 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 malloc2 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 [2024-10-08 16:20:24.291402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.092 [2024-10-08 16:20:24.291494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.092 [2024-10-08 16:20:24.291555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:31.092 [2024-10-08 16:20:24.291578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.092 [2024-10-08 16:20:24.294494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.092 [2024-10-08 16:20:24.294555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.092 
pt2 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 malloc3 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 [2024-10-08 16:20:24.348825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.092 [2024-10-08 16:20:24.349113] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.092 [2024-10-08 16:20:24.349165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:31.092 [2024-10-08 16:20:24.349185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.092 [2024-10-08 16:20:24.352165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.092 [2024-10-08 16:20:24.352362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.092 pt3 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.092 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 malloc4 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.093 [2024-10-08 16:20:24.406704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.093 [2024-10-08 16:20:24.406783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.093 [2024-10-08 16:20:24.406819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:31.093 [2024-10-08 16:20:24.406837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.093 [2024-10-08 16:20:24.409663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.093 [2024-10-08 16:20:24.409714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.093 pt4 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.093 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.351 [2024-10-08 16:20:24.418777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.351 [2024-10-08 
16:20:24.421211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.351 [2024-10-08 16:20:24.421457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.351 [2024-10-08 16:20:24.421601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.351 [2024-10-08 16:20:24.421875] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:31.351 [2024-10-08 16:20:24.421904] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:31.351 [2024-10-08 16:20:24.422281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.351 [2024-10-08 16:20:24.422516] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:31.351 [2024-10-08 16:20:24.422542] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:31.351 [2024-10-08 16:20:24.422812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.351 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.352 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.352 "name": "raid_bdev1", 00:12:31.352 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8", 00:12:31.352 "strip_size_kb": 64, 00:12:31.352 "state": "online", 00:12:31.352 "raid_level": "raid0", 00:12:31.352 "superblock": true, 00:12:31.352 "num_base_bdevs": 4, 00:12:31.352 "num_base_bdevs_discovered": 4, 00:12:31.352 "num_base_bdevs_operational": 4, 00:12:31.352 "base_bdevs_list": [ 00:12:31.352 { 00:12:31.352 "name": "pt1", 00:12:31.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.352 "is_configured": true, 00:12:31.352 "data_offset": 2048, 00:12:31.352 "data_size": 63488 00:12:31.352 }, 00:12:31.352 { 00:12:31.352 "name": "pt2", 00:12:31.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.352 "is_configured": true, 00:12:31.352 "data_offset": 2048, 00:12:31.352 "data_size": 63488 00:12:31.352 }, 00:12:31.352 { 00:12:31.352 "name": "pt3", 00:12:31.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.352 "is_configured": true, 00:12:31.352 "data_offset": 2048, 00:12:31.352 
"data_size": 63488 00:12:31.352 }, 00:12:31.352 { 00:12:31.352 "name": "pt4", 00:12:31.352 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.352 "is_configured": true, 00:12:31.352 "data_offset": 2048, 00:12:31.352 "data_size": 63488 00:12:31.352 } 00:12:31.352 ] 00:12:31.352 }' 00:12:31.352 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.352 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.934 [2024-10-08 16:20:24.963365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.934 16:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.934 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.934 "name": "raid_bdev1", 00:12:31.934 "aliases": [ 00:12:31.934 "928b09ec-6ebf-43b0-8030-ac317c1e68a8" 
00:12:31.934 ], 00:12:31.934 "product_name": "Raid Volume", 00:12:31.934 "block_size": 512, 00:12:31.934 "num_blocks": 253952, 00:12:31.934 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8", 00:12:31.934 "assigned_rate_limits": { 00:12:31.934 "rw_ios_per_sec": 0, 00:12:31.934 "rw_mbytes_per_sec": 0, 00:12:31.934 "r_mbytes_per_sec": 0, 00:12:31.934 "w_mbytes_per_sec": 0 00:12:31.934 }, 00:12:31.934 "claimed": false, 00:12:31.934 "zoned": false, 00:12:31.934 "supported_io_types": { 00:12:31.934 "read": true, 00:12:31.934 "write": true, 00:12:31.934 "unmap": true, 00:12:31.934 "flush": true, 00:12:31.934 "reset": true, 00:12:31.934 "nvme_admin": false, 00:12:31.934 "nvme_io": false, 00:12:31.934 "nvme_io_md": false, 00:12:31.934 "write_zeroes": true, 00:12:31.934 "zcopy": false, 00:12:31.934 "get_zone_info": false, 00:12:31.934 "zone_management": false, 00:12:31.934 "zone_append": false, 00:12:31.934 "compare": false, 00:12:31.934 "compare_and_write": false, 00:12:31.934 "abort": false, 00:12:31.934 "seek_hole": false, 00:12:31.934 "seek_data": false, 00:12:31.934 "copy": false, 00:12:31.934 "nvme_iov_md": false 00:12:31.934 }, 00:12:31.934 "memory_domains": [ 00:12:31.934 { 00:12:31.934 "dma_device_id": "system", 00:12:31.934 "dma_device_type": 1 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.934 "dma_device_type": 2 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "system", 00:12:31.934 "dma_device_type": 1 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.934 "dma_device_type": 2 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "system", 00:12:31.934 "dma_device_type": 1 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.934 "dma_device_type": 2 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": "system", 00:12:31.934 "dma_device_type": 1 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:31.934 "dma_device_type": 2 00:12:31.934 } 00:12:31.934 ], 00:12:31.934 "driver_specific": { 00:12:31.934 "raid": { 00:12:31.934 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8", 00:12:31.934 "strip_size_kb": 64, 00:12:31.934 "state": "online", 00:12:31.934 "raid_level": "raid0", 00:12:31.935 "superblock": true, 00:12:31.935 "num_base_bdevs": 4, 00:12:31.935 "num_base_bdevs_discovered": 4, 00:12:31.935 "num_base_bdevs_operational": 4, 00:12:31.935 "base_bdevs_list": [ 00:12:31.935 { 00:12:31.935 "name": "pt1", 00:12:31.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.935 "is_configured": true, 00:12:31.935 "data_offset": 2048, 00:12:31.935 "data_size": 63488 00:12:31.935 }, 00:12:31.935 { 00:12:31.935 "name": "pt2", 00:12:31.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.935 "is_configured": true, 00:12:31.935 "data_offset": 2048, 00:12:31.935 "data_size": 63488 00:12:31.935 }, 00:12:31.935 { 00:12:31.935 "name": "pt3", 00:12:31.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.935 "is_configured": true, 00:12:31.935 "data_offset": 2048, 00:12:31.935 "data_size": 63488 00:12:31.935 }, 00:12:31.935 { 00:12:31.935 "name": "pt4", 00:12:31.935 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.935 "is_configured": true, 00:12:31.935 "data_offset": 2048, 00:12:31.935 "data_size": 63488 00:12:31.935 } 00:12:31.935 ] 00:12:31.935 } 00:12:31.935 } 00:12:31.935 }' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:31.935 pt2 00:12:31.935 pt3 00:12:31.935 pt4' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.935 16:20:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.935 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 [2024-10-08 16:20:25.347422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.193 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=928b09ec-6ebf-43b0-8030-ac317c1e68a8 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 928b09ec-6ebf-43b0-8030-ac317c1e68a8 ']' 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.194 [2024-10-08 16:20:25.395054] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.194 [2024-10-08 16:20:25.395096] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.194 [2024-10-08 16:20:25.395249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.194 [2024-10-08 16:20:25.395365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.194 [2024-10-08 16:20:25.395394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.194 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.453 [2024-10-08 16:20:25.555142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:32.453 [2024-10-08 16:20:25.557829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:32.453 [2024-10-08 16:20:25.557924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:32.453 [2024-10-08 16:20:25.558004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:32.453 [2024-10-08 16:20:25.558121] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:32.453 [2024-10-08 16:20:25.558210] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:32.453 [2024-10-08 16:20:25.558251] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:32.453 [2024-10-08 16:20:25.558291] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:32.453 [2024-10-08 16:20:25.558318] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:32.453 [2024-10-08 16:20:25.558338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:32.453 request:
00:12:32.453 {
00:12:32.453 "name": "raid_bdev1",
00:12:32.453 "raid_level": "raid0",
00:12:32.453 "base_bdevs": [
00:12:32.453 "malloc1",
00:12:32.453 "malloc2",
00:12:32.453 "malloc3",
00:12:32.453 "malloc4"
00:12:32.453 ],
00:12:32.453 "strip_size_kb": 64,
00:12:32.453 "superblock": false,
00:12:32.453 "method": "bdev_raid_create",
00:12:32.453 "req_id": 1
00:12:32.453 }
00:12:32.453 Got JSON-RPC error response
00:12:32.453 response:
00:12:32.453 {
00:12:32.453 "code": -17,
00:12:32.453 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:32.453 }
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.453 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.453 [2024-10-08 16:20:25.627175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:32.453 [2024-10-08 16:20:25.627424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:32.453 [2024-10-08 16:20:25.627505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:32.453 [2024-10-08 16:20:25.627755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:32.454 [2024-10-08 16:20:25.630827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:32.454 [2024-10-08 16:20:25.631007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:32.454 [2024-10-08 16:20:25.631246] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:32.454 [2024-10-08 16:20:25.631468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:32.454 pt1
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:32.454 "name": "raid_bdev1",
00:12:32.454 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8",
00:12:32.454 "strip_size_kb": 64,
00:12:32.454 "state": "configuring",
00:12:32.454 "raid_level": "raid0",
00:12:32.454 "superblock": true,
00:12:32.454 "num_base_bdevs": 4,
00:12:32.454 "num_base_bdevs_discovered": 1,
00:12:32.454 "num_base_bdevs_operational": 4,
00:12:32.454 "base_bdevs_list": [
00:12:32.454 {
00:12:32.454 "name": "pt1",
00:12:32.454 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:32.454 "is_configured": true,
00:12:32.454 "data_offset": 2048,
00:12:32.454 "data_size": 63488
00:12:32.454 },
00:12:32.454 {
00:12:32.454 "name": null,
00:12:32.454 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:32.454 "is_configured": false,
00:12:32.454 "data_offset": 2048,
00:12:32.454 "data_size": 63488
00:12:32.454 },
00:12:32.454 {
00:12:32.454 "name": null,
00:12:32.454 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:32.454 "is_configured": false,
00:12:32.454 "data_offset": 2048,
00:12:32.454 "data_size": 63488
00:12:32.454 },
00:12:32.454 {
00:12:32.454 "name": null,
00:12:32.454 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:32.454 "is_configured": false,
00:12:32.454 "data_offset": 2048,
00:12:32.454 "data_size": 63488
00:12:32.454 }
00:12:32.454 ]
00:12:32.454 }'
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:32.454 16:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.022 [2024-10-08 16:20:26.139531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:33.022 [2024-10-08 16:20:26.139632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:33.022 [2024-10-08 16:20:26.139666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:33.022 [2024-10-08 16:20:26.139688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:33.022 [2024-10-08 16:20:26.140320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:33.022 [2024-10-08 16:20:26.140376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:33.022 [2024-10-08 16:20:26.140514] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:33.022 [2024-10-08 16:20:26.140585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:33.022 pt2
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.022 [2024-10-08 16:20:26.151545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.022 "name": "raid_bdev1",
00:12:33.022 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8",
00:12:33.022 "strip_size_kb": 64,
00:12:33.022 "state": "configuring",
00:12:33.022 "raid_level": "raid0",
00:12:33.022 "superblock": true,
00:12:33.022 "num_base_bdevs": 4,
00:12:33.022 "num_base_bdevs_discovered": 1,
00:12:33.022 "num_base_bdevs_operational": 4,
00:12:33.022 "base_bdevs_list": [
00:12:33.022 {
00:12:33.022 "name": "pt1",
00:12:33.022 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:33.022 "is_configured": true,
00:12:33.022 "data_offset": 2048,
00:12:33.022 "data_size": 63488
00:12:33.022 },
00:12:33.022 {
00:12:33.022 "name": null,
00:12:33.022 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:33.022 "is_configured": false,
00:12:33.022 "data_offset": 0,
00:12:33.022 "data_size": 63488
00:12:33.022 },
00:12:33.022 {
00:12:33.022 "name": null,
00:12:33.022 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:33.022 "is_configured": false,
00:12:33.022 "data_offset": 2048,
00:12:33.022 "data_size": 63488
00:12:33.022 },
00:12:33.022 {
00:12:33.022 "name": null,
00:12:33.022 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:33.022 "is_configured": false,
00:12:33.022 "data_offset": 2048,
00:12:33.022 "data_size": 63488
00:12:33.022 }
00:12:33.022 ]
00:12:33.022 }'
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.022 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.588 [2024-10-08 16:20:26.683704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:33.588 [2024-10-08 16:20:26.683820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:33.588 [2024-10-08 16:20:26.683869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:33.588 [2024-10-08 16:20:26.683894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:33.588 [2024-10-08 16:20:26.684551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:33.588 [2024-10-08 16:20:26.684582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:33.588 [2024-10-08 16:20:26.684727] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:33.588 [2024-10-08 16:20:26.684767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:33.588 pt2
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.588 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.589 [2024-10-08 16:20:26.691668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:33.589 [2024-10-08 16:20:26.691743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:33.589 [2024-10-08 16:20:26.691786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:33.589 [2024-10-08 16:20:26.691805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:33.589 [2024-10-08 16:20:26.692275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:33.589 [2024-10-08 16:20:26.692311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:33.589 [2024-10-08 16:20:26.692401] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:33.589 [2024-10-08 16:20:26.692434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:33.589 pt3
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.589 [2024-10-08 16:20:26.699640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:33.589 [2024-10-08 16:20:26.699708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:33.589 [2024-10-08 16:20:26.699742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:33.589 [2024-10-08 16:20:26.699760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:33.589 [2024-10-08 16:20:26.700233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:33.589 [2024-10-08 16:20:26.700269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:33.589 [2024-10-08 16:20:26.700359] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:33.589 [2024-10-08 16:20:26.700391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:33.589 [2024-10-08 16:20:26.700607] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:33.589 [2024-10-08 16:20:26.700626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:33.589 [2024-10-08 16:20:26.700978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:33.589 [2024-10-08 16:20:26.701195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:33.589 [2024-10-08 16:20:26.701229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:33.589 [2024-10-08 16:20:26.701471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:33.589 pt4
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.589 "name": "raid_bdev1",
00:12:33.589 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8",
00:12:33.589 "strip_size_kb": 64,
00:12:33.589 "state": "online",
00:12:33.589 "raid_level": "raid0",
00:12:33.589 "superblock": true,
00:12:33.589 "num_base_bdevs": 4,
00:12:33.589 "num_base_bdevs_discovered": 4,
00:12:33.589 "num_base_bdevs_operational": 4,
00:12:33.589 "base_bdevs_list": [
00:12:33.589 {
00:12:33.589 "name": "pt1",
00:12:33.589 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:33.589 "is_configured": true,
00:12:33.589 "data_offset": 2048,
00:12:33.589 "data_size": 63488
00:12:33.589 },
00:12:33.589 {
00:12:33.589 "name": "pt2",
00:12:33.589 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:33.589 "is_configured": true,
00:12:33.589 "data_offset": 2048,
00:12:33.589 "data_size": 63488
00:12:33.589 },
00:12:33.589 {
00:12:33.589 "name": "pt3",
00:12:33.589 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:33.589 "is_configured": true,
00:12:33.589 "data_offset": 2048,
00:12:33.589 "data_size": 63488
00:12:33.589 },
00:12:33.589 {
00:12:33.589 "name": "pt4",
00:12:33.589 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:33.589 "is_configured": true,
00:12:33.589 "data_offset": 2048,
00:12:33.589 "data_size": 63488
00:12:33.589 }
00:12:33.589 ]
00:12:33.589 }'
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.589 16:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.157 [2024-10-08 16:20:27.264948] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:34.157 "name": "raid_bdev1",
00:12:34.157 "aliases": [
00:12:34.157 "928b09ec-6ebf-43b0-8030-ac317c1e68a8"
00:12:34.157 ],
00:12:34.157 "product_name": "Raid Volume",
00:12:34.157 "block_size": 512,
00:12:34.157 "num_blocks": 253952,
00:12:34.157 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8",
00:12:34.157 "assigned_rate_limits": {
00:12:34.157 "rw_ios_per_sec": 0,
00:12:34.157 "rw_mbytes_per_sec": 0,
00:12:34.157 "r_mbytes_per_sec": 0,
00:12:34.157 "w_mbytes_per_sec": 0
00:12:34.157 },
00:12:34.157 "claimed": false,
00:12:34.157 "zoned": false,
00:12:34.157 "supported_io_types": {
00:12:34.157 "read": true,
00:12:34.157 "write": true,
00:12:34.157 "unmap": true,
00:12:34.157 "flush": true,
00:12:34.157 "reset": true,
00:12:34.157 "nvme_admin": false,
00:12:34.157 "nvme_io": false,
00:12:34.157 "nvme_io_md": false,
00:12:34.157 "write_zeroes": true,
00:12:34.157 "zcopy": false,
00:12:34.157 "get_zone_info": false,
00:12:34.157 "zone_management": false,
00:12:34.157 "zone_append": false,
00:12:34.157 "compare": false,
00:12:34.157 "compare_and_write": false,
00:12:34.157 "abort": false,
00:12:34.157 "seek_hole": false,
00:12:34.157 "seek_data": false,
00:12:34.157 "copy": false,
00:12:34.157 "nvme_iov_md": false
00:12:34.157 },
00:12:34.157 "memory_domains": [
00:12:34.157 {
00:12:34.157 "dma_device_id": "system",
00:12:34.157 "dma_device_type": 1
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:34.157 "dma_device_type": 2
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "system",
00:12:34.157 "dma_device_type": 1
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:34.157 "dma_device_type": 2
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "system",
00:12:34.157 "dma_device_type": 1
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:34.157 "dma_device_type": 2
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "system",
00:12:34.157 "dma_device_type": 1
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:34.157 "dma_device_type": 2
00:12:34.157 }
00:12:34.157 ],
00:12:34.157 "driver_specific": {
00:12:34.157 "raid": {
00:12:34.157 "uuid": "928b09ec-6ebf-43b0-8030-ac317c1e68a8",
00:12:34.157 "strip_size_kb": 64,
00:12:34.157 "state": "online",
00:12:34.157 "raid_level": "raid0",
00:12:34.157 "superblock": true,
00:12:34.157 "num_base_bdevs": 4,
00:12:34.157 "num_base_bdevs_discovered": 4,
00:12:34.157 "num_base_bdevs_operational": 4,
00:12:34.157 "base_bdevs_list": [
00:12:34.157 {
00:12:34.157 "name": "pt1",
00:12:34.157 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:34.157 "is_configured": true,
00:12:34.157 "data_offset": 2048,
00:12:34.157 "data_size": 63488
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "name": "pt2",
00:12:34.157 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:34.157 "is_configured": true,
00:12:34.157 "data_offset": 2048,
00:12:34.157 "data_size": 63488
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "name": "pt3",
00:12:34.157 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:34.157 "is_configured": true,
00:12:34.157 "data_offset": 2048,
00:12:34.157 "data_size": 63488
00:12:34.157 },
00:12:34.157 {
00:12:34.157 "name": "pt4",
00:12:34.157 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:34.157 "is_configured": true,
00:12:34.157 "data_offset": 2048,
00:12:34.157 "data_size": 63488
00:12:34.157 }
00:12:34.157 ]
00:12:34.157 }
00:12:34.157 }
00:12:34.157 }'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:34.157 pt2
00:12:34.157 pt3
00:12:34.157 pt4'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:34.157 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:34.158 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:34.158 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.158 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.417 [2024-10-08 16:20:27.640925] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 928b09ec-6ebf-43b0-8030-ac317c1e68a8 '!=' 928b09ec-6ebf-43b0-8030-ac317c1e68a8 ']'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71164
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71164 ']'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71164
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:34.417 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71164
00:12:34.681 killing process with pid 71164
16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:34.681 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:34.681 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71164'
00:12:34.681 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 71164
00:12:34.681 [2024-10-08 16:20:27.746409] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:34.681 16:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 71164
00:12:34.681 [2024-10-08 16:20:27.746531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:34.681 [2024-10-08 16:20:27.746655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:34.681 [2024-10-08 16:20:27.746675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:12:34.939 [2024-10-08 16:20:28.111860] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:36.323 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:12:36.323
00:12:36.323 real 0m6.333s
00:12:36.323 user 0m9.403s
00:12:36.323 sys 0m0.975s
00:12:36.323 ************************************
00:12:36.323 END TEST raid_superblock_test
00:12:36.323 ************************************
00:12:36.323 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:36.323 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:36.323 16:20:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:12:36.323 16:20:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:36.323 16:20:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:36.323 16:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:36.323 ************************************
00:12:36.323 START TEST raid_read_error_test
00:12:36.323 ************************************
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.R2qZQ7hzVi
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71430
00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71430 00:12:36.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71430 ']' 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.323 16:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 [2024-10-08 16:20:29.502791] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:36.323 [2024-10-08 16:20:29.503603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71430 ] 00:12:36.581 [2024-10-08 16:20:29.680149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.839 [2024-10-08 16:20:29.919193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.839 [2024-10-08 16:20:30.117855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.839 [2024-10-08 16:20:30.117946] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 BaseBdev1_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 true 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 [2024-10-08 16:20:30.611077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:37.406 [2024-10-08 16:20:30.611160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.406 [2024-10-08 16:20:30.611187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:37.406 [2024-10-08 16:20:30.611205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.406 [2024-10-08 16:20:30.613963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.406 [2024-10-08 16:20:30.614236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.406 BaseBdev1 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 BaseBdev2_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 true 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.406 [2024-10-08 16:20:30.684114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:37.406 [2024-10-08 16:20:30.684210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.406 [2024-10-08 16:20:30.684236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:37.406 [2024-10-08 16:20:30.684253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.406 [2024-10-08 16:20:30.687047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.406 [2024-10-08 16:20:30.687115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.406 BaseBdev2 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.406 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 BaseBdev3_malloc 00:12:37.666 16:20:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 true 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 [2024-10-08 16:20:30.748324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:37.666 [2024-10-08 16:20:30.748411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.666 [2024-10-08 16:20:30.748442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:37.666 [2024-10-08 16:20:30.748462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.666 [2024-10-08 16:20:30.751449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.666 [2024-10-08 16:20:30.751500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:37.666 BaseBdev3 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 BaseBdev4_malloc 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 true 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 [2024-10-08 16:20:30.809763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:37.666 [2024-10-08 16:20:30.809848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.666 [2024-10-08 16:20:30.809879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:37.666 [2024-10-08 16:20:30.809901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.666 [2024-10-08 16:20:30.812928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.666 [2024-10-08 16:20:30.812995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:37.666 BaseBdev4 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 [2024-10-08 16:20:30.822009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.666 [2024-10-08 16:20:30.824521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.666 [2024-10-08 16:20:30.824677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.666 [2024-10-08 16:20:30.824772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.666 [2024-10-08 16:20:30.825074] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:37.666 [2024-10-08 16:20:30.825105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:37.666 [2024-10-08 16:20:30.825470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:37.666 [2024-10-08 16:20:30.825724] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:37.666 [2024-10-08 16:20:30.825741] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:37.666 [2024-10-08 16:20:30.826036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:37.666 16:20:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.666 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.666 "name": "raid_bdev1", 00:12:37.666 "uuid": "f10dd35a-0b94-4ad6-8547-1e55ea2c93c7", 00:12:37.666 "strip_size_kb": 64, 00:12:37.666 "state": "online", 00:12:37.666 "raid_level": "raid0", 00:12:37.667 "superblock": true, 00:12:37.667 "num_base_bdevs": 4, 00:12:37.667 "num_base_bdevs_discovered": 4, 00:12:37.667 "num_base_bdevs_operational": 4, 00:12:37.667 "base_bdevs_list": [ 00:12:37.667 
{ 00:12:37.667 "name": "BaseBdev1", 00:12:37.667 "uuid": "7fab259d-0585-5e9b-a236-ccfd7dcbc317", 00:12:37.667 "is_configured": true, 00:12:37.667 "data_offset": 2048, 00:12:37.667 "data_size": 63488 00:12:37.667 }, 00:12:37.667 { 00:12:37.667 "name": "BaseBdev2", 00:12:37.667 "uuid": "8545842e-3b75-5a63-be73-c166c35380db", 00:12:37.667 "is_configured": true, 00:12:37.667 "data_offset": 2048, 00:12:37.667 "data_size": 63488 00:12:37.667 }, 00:12:37.667 { 00:12:37.667 "name": "BaseBdev3", 00:12:37.667 "uuid": "1b4312f3-c70d-5684-952a-580464685541", 00:12:37.667 "is_configured": true, 00:12:37.667 "data_offset": 2048, 00:12:37.667 "data_size": 63488 00:12:37.667 }, 00:12:37.667 { 00:12:37.667 "name": "BaseBdev4", 00:12:37.667 "uuid": "a82d0745-084c-5df7-b443-2e1b4386f27e", 00:12:37.667 "is_configured": true, 00:12:37.667 "data_offset": 2048, 00:12:37.667 "data_size": 63488 00:12:37.667 } 00:12:37.667 ] 00:12:37.667 }' 00:12:37.667 16:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.667 16:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.233 16:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:38.233 16:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.233 [2024-10-08 16:20:31.503451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.167 16:20:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.167 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.167 16:20:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.167 "name": "raid_bdev1", 00:12:39.167 "uuid": "f10dd35a-0b94-4ad6-8547-1e55ea2c93c7", 00:12:39.167 "strip_size_kb": 64, 00:12:39.167 "state": "online", 00:12:39.167 "raid_level": "raid0", 00:12:39.167 "superblock": true, 00:12:39.167 "num_base_bdevs": 4, 00:12:39.167 "num_base_bdevs_discovered": 4, 00:12:39.167 "num_base_bdevs_operational": 4, 00:12:39.167 "base_bdevs_list": [ 00:12:39.167 { 00:12:39.167 "name": "BaseBdev1", 00:12:39.167 "uuid": "7fab259d-0585-5e9b-a236-ccfd7dcbc317", 00:12:39.167 "is_configured": true, 00:12:39.167 "data_offset": 2048, 00:12:39.167 "data_size": 63488 00:12:39.167 }, 00:12:39.167 { 00:12:39.167 "name": "BaseBdev2", 00:12:39.167 "uuid": "8545842e-3b75-5a63-be73-c166c35380db", 00:12:39.167 "is_configured": true, 00:12:39.167 "data_offset": 2048, 00:12:39.167 "data_size": 63488 00:12:39.167 }, 00:12:39.167 { 00:12:39.167 "name": "BaseBdev3", 00:12:39.167 "uuid": "1b4312f3-c70d-5684-952a-580464685541", 00:12:39.167 "is_configured": true, 00:12:39.167 "data_offset": 2048, 00:12:39.167 "data_size": 63488 00:12:39.167 }, 00:12:39.167 { 00:12:39.167 "name": "BaseBdev4", 00:12:39.167 "uuid": "a82d0745-084c-5df7-b443-2e1b4386f27e", 00:12:39.167 "is_configured": true, 00:12:39.168 "data_offset": 2048, 00:12:39.168 "data_size": 63488 00:12:39.168 } 00:12:39.168 ] 00:12:39.168 }' 00:12:39.168 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.168 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.754 [2024-10-08 16:20:32.902684] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.754 [2024-10-08 16:20:32.902936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.754 [2024-10-08 16:20:32.906384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.754 [2024-10-08 16:20:32.906603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.754 [2024-10-08 16:20:32.906678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.754 [2024-10-08 16:20:32.906699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:39.754 { 00:12:39.754 "results": [ 00:12:39.754 { 00:12:39.754 "job": "raid_bdev1", 00:12:39.754 "core_mask": "0x1", 00:12:39.754 "workload": "randrw", 00:12:39.754 "percentage": 50, 00:12:39.754 "status": "finished", 00:12:39.754 "queue_depth": 1, 00:12:39.754 "io_size": 131072, 00:12:39.754 "runtime": 1.396591, 00:12:39.754 "iops": 11042.603024077915, 00:12:39.754 "mibps": 1380.3253780097393, 00:12:39.754 "io_failed": 1, 00:12:39.754 "io_timeout": 0, 00:12:39.754 "avg_latency_us": 126.65756668022375, 00:12:39.754 "min_latency_us": 38.4, 00:12:39.754 "max_latency_us": 1779.898181818182 00:12:39.754 } 00:12:39.754 ], 00:12:39.754 "core_count": 1 00:12:39.754 } 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71430 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71430 ']' 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71430 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71430 00:12:39.754 killing process with pid 71430 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71430' 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71430 00:12:39.754 [2024-10-08 16:20:32.941686] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.754 16:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71430 00:12:40.012 [2024-10-08 16:20:33.230730] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.R2qZQ7hzVi 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:41.388 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.389 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:41.389 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:41.389 00:12:41.389 real 0m5.084s 00:12:41.389 user 0m6.222s 00:12:41.389 sys 0m0.678s 00:12:41.389 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:41.389 ************************************ 00:12:41.389 END TEST raid_read_error_test 00:12:41.389 ************************************ 00:12:41.389 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.389 16:20:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:41.389 16:20:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:41.389 16:20:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.389 16:20:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.389 ************************************ 00:12:41.389 START TEST raid_write_error_test 00:12:41.389 ************************************ 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IgKztAWDf4 00:12:41.389 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71581 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71581 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71581 ']' 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.389 16:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.389 [2024-10-08 16:20:34.624133] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:41.389 [2024-10-08 16:20:34.625038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71581 ] 00:12:41.648 [2024-10-08 16:20:34.789298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.971 [2024-10-08 16:20:35.032504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.971 [2024-10-08 16:20:35.237452] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.971 [2024-10-08 16:20:35.237712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 BaseBdev1_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 true 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 [2024-10-08 16:20:35.703335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:42.538 [2024-10-08 16:20:35.703426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.538 [2024-10-08 16:20:35.703453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:42.538 [2024-10-08 16:20:35.703473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.538 [2024-10-08 16:20:35.706324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.538 [2024-10-08 16:20:35.706376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.538 BaseBdev1 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 BaseBdev2_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:42.538 16:20:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 true 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 [2024-10-08 16:20:35.771906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:42.538 [2024-10-08 16:20:35.771998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.538 [2024-10-08 16:20:35.772026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:42.538 [2024-10-08 16:20:35.772045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.538 [2024-10-08 16:20:35.774873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.538 [2024-10-08 16:20:35.774927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.538 BaseBdev2 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:42.538 BaseBdev3_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 true 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 [2024-10-08 16:20:35.827664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:42.538 [2024-10-08 16:20:35.827748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.538 [2024-10-08 16:20:35.827775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:42.538 [2024-10-08 16:20:35.827791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.538 [2024-10-08 16:20:35.830541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.538 [2024-10-08 16:20:35.830596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:42.538 BaseBdev3 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 BaseBdev4_malloc 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 true 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 [2024-10-08 16:20:35.888510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:42.796 [2024-10-08 16:20:35.888640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.796 [2024-10-08 16:20:35.888667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:42.796 [2024-10-08 16:20:35.888686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.796 [2024-10-08 16:20:35.891479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.796 [2024-10-08 16:20:35.891589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:42.796 BaseBdev4 
00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 [2024-10-08 16:20:35.896626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.796 [2024-10-08 16:20:35.899261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.796 [2024-10-08 16:20:35.899503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.796 [2024-10-08 16:20:35.899751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:42.796 [2024-10-08 16:20:35.900162] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:42.796 [2024-10-08 16:20:35.900323] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:42.796 [2024-10-08 16:20:35.900723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:42.796 [2024-10-08 16:20:35.901072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:42.796 [2024-10-08 16:20:35.901205] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:42.796 [2024-10-08 16:20:35.901638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.796 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.797 "name": "raid_bdev1", 00:12:42.797 "uuid": "a20490ff-c6a4-4f63-a241-decb7f7bc194", 00:12:42.797 "strip_size_kb": 64, 00:12:42.797 "state": "online", 00:12:42.797 "raid_level": "raid0", 00:12:42.797 "superblock": true, 00:12:42.797 "num_base_bdevs": 4, 00:12:42.797 "num_base_bdevs_discovered": 4, 00:12:42.797 
"num_base_bdevs_operational": 4, 00:12:42.797 "base_bdevs_list": [ 00:12:42.797 { 00:12:42.797 "name": "BaseBdev1", 00:12:42.797 "uuid": "9fa94acc-58ec-52c1-a134-f712eaa6e75d", 00:12:42.797 "is_configured": true, 00:12:42.797 "data_offset": 2048, 00:12:42.797 "data_size": 63488 00:12:42.797 }, 00:12:42.797 { 00:12:42.797 "name": "BaseBdev2", 00:12:42.797 "uuid": "e0babdca-ec29-5d25-90a4-db007ebe2645", 00:12:42.797 "is_configured": true, 00:12:42.797 "data_offset": 2048, 00:12:42.797 "data_size": 63488 00:12:42.797 }, 00:12:42.797 { 00:12:42.797 "name": "BaseBdev3", 00:12:42.797 "uuid": "6c0518a2-84fc-5431-9e9d-6d3caf64c12f", 00:12:42.797 "is_configured": true, 00:12:42.797 "data_offset": 2048, 00:12:42.797 "data_size": 63488 00:12:42.797 }, 00:12:42.797 { 00:12:42.797 "name": "BaseBdev4", 00:12:42.797 "uuid": "9de2b965-99ea-5a06-8058-371dac0eb9ca", 00:12:42.797 "is_configured": true, 00:12:42.797 "data_offset": 2048, 00:12:42.797 "data_size": 63488 00:12:42.797 } 00:12:42.797 ] 00:12:42.797 }' 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.797 16:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.363 16:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:43.363 16:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.363 [2024-10-08 16:20:36.543085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.298 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.299 "name": "raid_bdev1", 00:12:44.299 "uuid": "a20490ff-c6a4-4f63-a241-decb7f7bc194", 00:12:44.299 "strip_size_kb": 64, 00:12:44.299 "state": "online", 00:12:44.299 "raid_level": "raid0", 00:12:44.299 "superblock": true, 00:12:44.299 "num_base_bdevs": 4, 00:12:44.299 "num_base_bdevs_discovered": 4, 00:12:44.299 "num_base_bdevs_operational": 4, 00:12:44.299 "base_bdevs_list": [ 00:12:44.299 { 00:12:44.299 "name": "BaseBdev1", 00:12:44.299 "uuid": "9fa94acc-58ec-52c1-a134-f712eaa6e75d", 00:12:44.299 "is_configured": true, 00:12:44.299 "data_offset": 2048, 00:12:44.299 "data_size": 63488 00:12:44.299 }, 00:12:44.299 { 00:12:44.299 "name": "BaseBdev2", 00:12:44.299 "uuid": "e0babdca-ec29-5d25-90a4-db007ebe2645", 00:12:44.299 "is_configured": true, 00:12:44.299 "data_offset": 2048, 00:12:44.299 "data_size": 63488 00:12:44.299 }, 00:12:44.299 { 00:12:44.299 "name": "BaseBdev3", 00:12:44.299 "uuid": "6c0518a2-84fc-5431-9e9d-6d3caf64c12f", 00:12:44.299 "is_configured": true, 00:12:44.299 "data_offset": 2048, 00:12:44.299 "data_size": 63488 00:12:44.299 }, 00:12:44.299 { 00:12:44.299 "name": "BaseBdev4", 00:12:44.299 "uuid": "9de2b965-99ea-5a06-8058-371dac0eb9ca", 00:12:44.299 "is_configured": true, 00:12:44.299 "data_offset": 2048, 00:12:44.299 "data_size": 63488 00:12:44.299 } 00:12:44.299 ] 00:12:44.299 }' 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.299 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:44.866 [2024-10-08 16:20:37.944480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.866 [2024-10-08 16:20:37.944577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.866 [2024-10-08 16:20:37.947954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.866 [2024-10-08 16:20:37.948270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.866 [2024-10-08 16:20:37.948349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.866 [2024-10-08 16:20:37.948371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:44.866 { 00:12:44.866 "results": [ 00:12:44.866 { 00:12:44.866 "job": "raid_bdev1", 00:12:44.866 "core_mask": "0x1", 00:12:44.866 "workload": "randrw", 00:12:44.866 "percentage": 50, 00:12:44.866 "status": "finished", 00:12:44.866 "queue_depth": 1, 00:12:44.866 "io_size": 131072, 00:12:44.866 "runtime": 1.398954, 00:12:44.866 "iops": 10784.48612320348, 00:12:44.866 "mibps": 1348.060765400435, 00:12:44.866 "io_failed": 1, 00:12:44.866 "io_timeout": 0, 00:12:44.866 "avg_latency_us": 129.76632411067195, 00:12:44.866 "min_latency_us": 37.93454545454546, 00:12:44.866 "max_latency_us": 1817.1345454545456 00:12:44.866 } 00:12:44.866 ], 00:12:44.866 "core_count": 1 00:12:44.866 } 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71581 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71581 ']' 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71581 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71581 00:12:44.866 killing process with pid 71581 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71581' 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71581 00:12:44.866 [2024-10-08 16:20:37.984516] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.866 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71581 00:12:45.128 [2024-10-08 16:20:38.262737] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IgKztAWDf4 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:46.504 00:12:46.504 real 0m4.984s 00:12:46.504 user 0m6.084s 00:12:46.504 sys 0m0.643s 00:12:46.504 
************************************ 00:12:46.504 END TEST raid_write_error_test 00:12:46.504 ************************************ 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.504 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.504 16:20:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:46.504 16:20:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:46.504 16:20:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:46.504 16:20:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.504 16:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.504 ************************************ 00:12:46.504 START TEST raid_state_function_test 00:12:46.504 ************************************ 00:12:46.504 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:12:46.504 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:46.504 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.505 16:20:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:46.505 Process raid pid: 71725 00:12:46.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71725 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71725' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71725 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71725 ']' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.505 16:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.505 [2024-10-08 16:20:39.678781] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:46.505 [2024-10-08 16:20:39.679171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.763 [2024-10-08 16:20:39.856825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.021 [2024-10-08 16:20:40.100728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.021 [2024-10-08 16:20:40.305318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.021 [2024-10-08 16:20:40.305645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.628 [2024-10-08 16:20:40.680397] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.628 [2024-10-08 16:20:40.680487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.628 [2024-10-08 16:20:40.680504] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.628 [2024-10-08 16:20:40.680573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.628 [2024-10-08 16:20:40.680589] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:47.628 [2024-10-08 16:20:40.680607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.628 [2024-10-08 16:20:40.680617] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:47.628 [2024-10-08 16:20:40.680632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.628 "name": "Existed_Raid", 00:12:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.628 "strip_size_kb": 64, 00:12:47.628 "state": "configuring", 00:12:47.628 "raid_level": "concat", 00:12:47.628 "superblock": false, 00:12:47.628 "num_base_bdevs": 4, 00:12:47.628 "num_base_bdevs_discovered": 0, 00:12:47.628 "num_base_bdevs_operational": 4, 00:12:47.628 "base_bdevs_list": [ 00:12:47.628 { 00:12:47.628 "name": "BaseBdev1", 00:12:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.628 "is_configured": false, 00:12:47.628 "data_offset": 0, 00:12:47.628 "data_size": 0 00:12:47.628 }, 00:12:47.628 { 00:12:47.628 "name": "BaseBdev2", 00:12:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.628 "is_configured": false, 00:12:47.628 "data_offset": 0, 00:12:47.628 "data_size": 0 00:12:47.628 }, 00:12:47.628 { 00:12:47.628 "name": "BaseBdev3", 00:12:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.628 "is_configured": false, 00:12:47.628 "data_offset": 0, 00:12:47.628 "data_size": 0 00:12:47.628 }, 00:12:47.628 { 00:12:47.628 "name": "BaseBdev4", 00:12:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.628 "is_configured": false, 00:12:47.628 "data_offset": 0, 00:12:47.628 "data_size": 0 00:12:47.628 } 00:12:47.628 ] 00:12:47.628 }' 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.628 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:47.886 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.886 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 [2024-10-08 16:20:41.204509] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.886 [2024-10-08 16:20:41.204874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 [2024-10-08 16:20:41.212473] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.145 [2024-10-08 16:20:41.212711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.145 [2024-10-08 16:20:41.212826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.145 [2024-10-08 16:20:41.212969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.145 [2024-10-08 16:20:41.213072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.145 [2024-10-08 16:20:41.213127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.145 [2024-10-08 16:20:41.213319] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:48.145 [2024-10-08 16:20:41.213402] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 [2024-10-08 16:20:41.263216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.145 BaseBdev1 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 [ 00:12:48.145 { 00:12:48.145 "name": "BaseBdev1", 00:12:48.145 "aliases": [ 00:12:48.145 "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f" 00:12:48.145 ], 00:12:48.145 "product_name": "Malloc disk", 00:12:48.145 "block_size": 512, 00:12:48.145 "num_blocks": 65536, 00:12:48.145 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:48.145 "assigned_rate_limits": { 00:12:48.145 "rw_ios_per_sec": 0, 00:12:48.145 "rw_mbytes_per_sec": 0, 00:12:48.145 "r_mbytes_per_sec": 0, 00:12:48.145 "w_mbytes_per_sec": 0 00:12:48.145 }, 00:12:48.145 "claimed": true, 00:12:48.145 "claim_type": "exclusive_write", 00:12:48.145 "zoned": false, 00:12:48.145 "supported_io_types": { 00:12:48.145 "read": true, 00:12:48.145 "write": true, 00:12:48.145 "unmap": true, 00:12:48.145 "flush": true, 00:12:48.145 "reset": true, 00:12:48.145 "nvme_admin": false, 00:12:48.145 "nvme_io": false, 00:12:48.145 "nvme_io_md": false, 00:12:48.145 "write_zeroes": true, 00:12:48.145 "zcopy": true, 00:12:48.145 "get_zone_info": false, 00:12:48.145 "zone_management": false, 00:12:48.145 "zone_append": false, 00:12:48.145 "compare": false, 00:12:48.145 "compare_and_write": false, 00:12:48.145 "abort": true, 00:12:48.145 "seek_hole": false, 00:12:48.145 "seek_data": false, 00:12:48.145 "copy": true, 00:12:48.145 "nvme_iov_md": false 00:12:48.145 }, 00:12:48.145 "memory_domains": [ 00:12:48.145 { 00:12:48.145 "dma_device_id": "system", 00:12:48.145 "dma_device_type": 1 00:12:48.145 }, 00:12:48.145 { 00:12:48.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.145 "dma_device_type": 2 00:12:48.145 } 00:12:48.145 ], 00:12:48.145 "driver_specific": {} 00:12:48.145 } 00:12:48.145 ] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.145 "name": "Existed_Raid", 
00:12:48.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.145 "strip_size_kb": 64, 00:12:48.145 "state": "configuring", 00:12:48.145 "raid_level": "concat", 00:12:48.145 "superblock": false, 00:12:48.145 "num_base_bdevs": 4, 00:12:48.145 "num_base_bdevs_discovered": 1, 00:12:48.145 "num_base_bdevs_operational": 4, 00:12:48.145 "base_bdevs_list": [ 00:12:48.145 { 00:12:48.145 "name": "BaseBdev1", 00:12:48.145 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:48.145 "is_configured": true, 00:12:48.145 "data_offset": 0, 00:12:48.145 "data_size": 65536 00:12:48.145 }, 00:12:48.145 { 00:12:48.145 "name": "BaseBdev2", 00:12:48.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.145 "is_configured": false, 00:12:48.145 "data_offset": 0, 00:12:48.145 "data_size": 0 00:12:48.145 }, 00:12:48.145 { 00:12:48.145 "name": "BaseBdev3", 00:12:48.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.145 "is_configured": false, 00:12:48.145 "data_offset": 0, 00:12:48.145 "data_size": 0 00:12:48.145 }, 00:12:48.145 { 00:12:48.145 "name": "BaseBdev4", 00:12:48.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.145 "is_configured": false, 00:12:48.145 "data_offset": 0, 00:12:48.145 "data_size": 0 00:12:48.145 } 00:12:48.145 ] 00:12:48.145 }' 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.145 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.713 [2024-10-08 16:20:41.867433] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.713 [2024-10-08 16:20:41.867542] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.713 [2024-10-08 16:20:41.875464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.713 [2024-10-08 16:20:41.878498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.713 [2024-10-08 16:20:41.878598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.713 [2024-10-08 16:20:41.878615] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.713 [2024-10-08 16:20:41.878632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.713 [2024-10-08 16:20:41.878642] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:48.713 [2024-10-08 16:20:41.878656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.713 "name": "Existed_Raid", 00:12:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.713 "strip_size_kb": 64, 00:12:48.713 "state": "configuring", 00:12:48.713 "raid_level": "concat", 00:12:48.713 "superblock": false, 00:12:48.713 "num_base_bdevs": 4, 00:12:48.713 
"num_base_bdevs_discovered": 1, 00:12:48.713 "num_base_bdevs_operational": 4, 00:12:48.713 "base_bdevs_list": [ 00:12:48.713 { 00:12:48.713 "name": "BaseBdev1", 00:12:48.713 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:48.713 "is_configured": true, 00:12:48.713 "data_offset": 0, 00:12:48.713 "data_size": 65536 00:12:48.713 }, 00:12:48.713 { 00:12:48.713 "name": "BaseBdev2", 00:12:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.713 "is_configured": false, 00:12:48.713 "data_offset": 0, 00:12:48.713 "data_size": 0 00:12:48.713 }, 00:12:48.713 { 00:12:48.713 "name": "BaseBdev3", 00:12:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.713 "is_configured": false, 00:12:48.713 "data_offset": 0, 00:12:48.713 "data_size": 0 00:12:48.713 }, 00:12:48.713 { 00:12:48.713 "name": "BaseBdev4", 00:12:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.713 "is_configured": false, 00:12:48.713 "data_offset": 0, 00:12:48.713 "data_size": 0 00:12:48.713 } 00:12:48.713 ] 00:12:48.713 }' 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.713 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.280 BaseBdev2 00:12:49.280 [2024-10-08 16:20:42.430008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.280 16:20:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.280 [ 00:12:49.280 { 00:12:49.280 "name": "BaseBdev2", 00:12:49.280 "aliases": [ 00:12:49.280 "42dd2097-38bc-4699-b37f-0099d6cc6717" 00:12:49.280 ], 00:12:49.280 "product_name": "Malloc disk", 00:12:49.280 "block_size": 512, 00:12:49.280 "num_blocks": 65536, 00:12:49.280 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:49.280 "assigned_rate_limits": { 00:12:49.280 "rw_ios_per_sec": 0, 00:12:49.280 "rw_mbytes_per_sec": 0, 00:12:49.280 "r_mbytes_per_sec": 0, 00:12:49.280 "w_mbytes_per_sec": 0 00:12:49.280 }, 00:12:49.280 "claimed": true, 00:12:49.280 "claim_type": "exclusive_write", 00:12:49.280 "zoned": false, 00:12:49.280 "supported_io_types": { 
00:12:49.280 "read": true, 00:12:49.280 "write": true, 00:12:49.280 "unmap": true, 00:12:49.280 "flush": true, 00:12:49.280 "reset": true, 00:12:49.280 "nvme_admin": false, 00:12:49.280 "nvme_io": false, 00:12:49.280 "nvme_io_md": false, 00:12:49.280 "write_zeroes": true, 00:12:49.280 "zcopy": true, 00:12:49.280 "get_zone_info": false, 00:12:49.280 "zone_management": false, 00:12:49.280 "zone_append": false, 00:12:49.280 "compare": false, 00:12:49.280 "compare_and_write": false, 00:12:49.280 "abort": true, 00:12:49.280 "seek_hole": false, 00:12:49.280 "seek_data": false, 00:12:49.280 "copy": true, 00:12:49.280 "nvme_iov_md": false 00:12:49.280 }, 00:12:49.280 "memory_domains": [ 00:12:49.280 { 00:12:49.280 "dma_device_id": "system", 00:12:49.280 "dma_device_type": 1 00:12:49.280 }, 00:12:49.280 { 00:12:49.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.280 "dma_device_type": 2 00:12:49.280 } 00:12:49.280 ], 00:12:49.280 "driver_specific": {} 00:12:49.280 } 00:12:49.280 ] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.280 "name": "Existed_Raid", 00:12:49.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.280 "strip_size_kb": 64, 00:12:49.280 "state": "configuring", 00:12:49.280 "raid_level": "concat", 00:12:49.280 "superblock": false, 00:12:49.280 "num_base_bdevs": 4, 00:12:49.280 "num_base_bdevs_discovered": 2, 00:12:49.280 "num_base_bdevs_operational": 4, 00:12:49.280 "base_bdevs_list": [ 00:12:49.280 { 00:12:49.280 "name": "BaseBdev1", 00:12:49.280 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:49.280 "is_configured": true, 00:12:49.280 "data_offset": 0, 00:12:49.280 "data_size": 65536 00:12:49.280 }, 00:12:49.280 { 00:12:49.280 "name": "BaseBdev2", 00:12:49.280 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:49.280 
"is_configured": true, 00:12:49.280 "data_offset": 0, 00:12:49.280 "data_size": 65536 00:12:49.280 }, 00:12:49.280 { 00:12:49.280 "name": "BaseBdev3", 00:12:49.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.280 "is_configured": false, 00:12:49.280 "data_offset": 0, 00:12:49.280 "data_size": 0 00:12:49.280 }, 00:12:49.280 { 00:12:49.280 "name": "BaseBdev4", 00:12:49.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.280 "is_configured": false, 00:12:49.280 "data_offset": 0, 00:12:49.280 "data_size": 0 00:12:49.280 } 00:12:49.280 ] 00:12:49.280 }' 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.280 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.870 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.870 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.870 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.870 [2024-10-08 16:20:43.033234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.870 BaseBdev3 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.870 [ 00:12:49.870 { 00:12:49.870 "name": "BaseBdev3", 00:12:49.870 "aliases": [ 00:12:49.870 "d461a2f2-c824-4782-a9d3-8b225881ba7d" 00:12:49.870 ], 00:12:49.870 "product_name": "Malloc disk", 00:12:49.870 "block_size": 512, 00:12:49.870 "num_blocks": 65536, 00:12:49.870 "uuid": "d461a2f2-c824-4782-a9d3-8b225881ba7d", 00:12:49.870 "assigned_rate_limits": { 00:12:49.870 "rw_ios_per_sec": 0, 00:12:49.870 "rw_mbytes_per_sec": 0, 00:12:49.870 "r_mbytes_per_sec": 0, 00:12:49.870 "w_mbytes_per_sec": 0 00:12:49.870 }, 00:12:49.870 "claimed": true, 00:12:49.870 "claim_type": "exclusive_write", 00:12:49.870 "zoned": false, 00:12:49.870 "supported_io_types": { 00:12:49.870 "read": true, 00:12:49.870 "write": true, 00:12:49.870 "unmap": true, 00:12:49.870 "flush": true, 00:12:49.870 "reset": true, 00:12:49.870 "nvme_admin": false, 00:12:49.870 "nvme_io": false, 00:12:49.870 "nvme_io_md": false, 00:12:49.870 "write_zeroes": true, 00:12:49.870 "zcopy": true, 00:12:49.870 "get_zone_info": false, 00:12:49.870 "zone_management": false, 00:12:49.870 "zone_append": false, 00:12:49.870 "compare": false, 00:12:49.870 "compare_and_write": false, 
00:12:49.870 "abort": true, 00:12:49.870 "seek_hole": false, 00:12:49.870 "seek_data": false, 00:12:49.870 "copy": true, 00:12:49.870 "nvme_iov_md": false 00:12:49.870 }, 00:12:49.870 "memory_domains": [ 00:12:49.870 { 00:12:49.870 "dma_device_id": "system", 00:12:49.870 "dma_device_type": 1 00:12:49.870 }, 00:12:49.870 { 00:12:49.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.870 "dma_device_type": 2 00:12:49.870 } 00:12:49.870 ], 00:12:49.870 "driver_specific": {} 00:12:49.870 } 00:12:49.870 ] 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.870 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.871 "name": "Existed_Raid", 00:12:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.871 "strip_size_kb": 64, 00:12:49.871 "state": "configuring", 00:12:49.871 "raid_level": "concat", 00:12:49.871 "superblock": false, 00:12:49.871 "num_base_bdevs": 4, 00:12:49.871 "num_base_bdevs_discovered": 3, 00:12:49.871 "num_base_bdevs_operational": 4, 00:12:49.871 "base_bdevs_list": [ 00:12:49.871 { 00:12:49.871 "name": "BaseBdev1", 00:12:49.871 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:49.871 "is_configured": true, 00:12:49.871 "data_offset": 0, 00:12:49.871 "data_size": 65536 00:12:49.871 }, 00:12:49.871 { 00:12:49.871 "name": "BaseBdev2", 00:12:49.871 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:49.871 "is_configured": true, 00:12:49.871 "data_offset": 0, 00:12:49.871 "data_size": 65536 00:12:49.871 }, 00:12:49.871 { 00:12:49.871 "name": "BaseBdev3", 00:12:49.871 "uuid": "d461a2f2-c824-4782-a9d3-8b225881ba7d", 00:12:49.871 "is_configured": true, 00:12:49.871 "data_offset": 0, 00:12:49.871 "data_size": 65536 00:12:49.871 }, 00:12:49.871 { 00:12:49.871 "name": "BaseBdev4", 00:12:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.871 "is_configured": false, 
00:12:49.871 "data_offset": 0, 00:12:49.871 "data_size": 0 00:12:49.871 } 00:12:49.871 ] 00:12:49.871 }' 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.871 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 [2024-10-08 16:20:43.616801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:50.436 [2024-10-08 16:20:43.616913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:50.436 [2024-10-08 16:20:43.616926] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:50.436 [2024-10-08 16:20:43.617230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:50.436 [2024-10-08 16:20:43.617491] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.436 [2024-10-08 16:20:43.617534] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:50.436 [2024-10-08 16:20:43.617865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.436 BaseBdev4 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 [ 00:12:50.436 { 00:12:50.436 "name": "BaseBdev4", 00:12:50.436 "aliases": [ 00:12:50.436 "5b0dd828-11a2-44f4-aadf-711f4d39f92f" 00:12:50.436 ], 00:12:50.436 "product_name": "Malloc disk", 00:12:50.436 "block_size": 512, 00:12:50.436 "num_blocks": 65536, 00:12:50.436 "uuid": "5b0dd828-11a2-44f4-aadf-711f4d39f92f", 00:12:50.436 "assigned_rate_limits": { 00:12:50.436 "rw_ios_per_sec": 0, 00:12:50.436 "rw_mbytes_per_sec": 0, 00:12:50.436 "r_mbytes_per_sec": 0, 00:12:50.436 "w_mbytes_per_sec": 0 00:12:50.436 }, 00:12:50.436 "claimed": true, 00:12:50.436 "claim_type": "exclusive_write", 00:12:50.436 "zoned": false, 00:12:50.436 "supported_io_types": { 00:12:50.436 "read": true, 00:12:50.436 "write": true, 00:12:50.436 "unmap": true, 00:12:50.436 "flush": true, 00:12:50.436 "reset": true, 00:12:50.436 
"nvme_admin": false, 00:12:50.436 "nvme_io": false, 00:12:50.436 "nvme_io_md": false, 00:12:50.436 "write_zeroes": true, 00:12:50.436 "zcopy": true, 00:12:50.436 "get_zone_info": false, 00:12:50.436 "zone_management": false, 00:12:50.436 "zone_append": false, 00:12:50.436 "compare": false, 00:12:50.436 "compare_and_write": false, 00:12:50.436 "abort": true, 00:12:50.436 "seek_hole": false, 00:12:50.436 "seek_data": false, 00:12:50.436 "copy": true, 00:12:50.436 "nvme_iov_md": false 00:12:50.436 }, 00:12:50.436 "memory_domains": [ 00:12:50.436 { 00:12:50.436 "dma_device_id": "system", 00:12:50.436 "dma_device_type": 1 00:12:50.436 }, 00:12:50.436 { 00:12:50.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.436 "dma_device_type": 2 00:12:50.436 } 00:12:50.436 ], 00:12:50.436 "driver_specific": {} 00:12:50.436 } 00:12:50.436 ] 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.436 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.436 
16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.437 "name": "Existed_Raid", 00:12:50.437 "uuid": "e647a3dc-0e97-4daa-8a47-65d65a09a04d", 00:12:50.437 "strip_size_kb": 64, 00:12:50.437 "state": "online", 00:12:50.437 "raid_level": "concat", 00:12:50.437 "superblock": false, 00:12:50.437 "num_base_bdevs": 4, 00:12:50.437 "num_base_bdevs_discovered": 4, 00:12:50.437 "num_base_bdevs_operational": 4, 00:12:50.437 "base_bdevs_list": [ 00:12:50.437 { 00:12:50.437 "name": "BaseBdev1", 00:12:50.437 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:50.437 "is_configured": true, 00:12:50.437 "data_offset": 0, 00:12:50.437 "data_size": 65536 00:12:50.437 }, 00:12:50.437 { 00:12:50.437 "name": "BaseBdev2", 00:12:50.437 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:50.437 "is_configured": true, 00:12:50.437 "data_offset": 0, 00:12:50.437 "data_size": 65536 00:12:50.437 }, 00:12:50.437 { 00:12:50.437 "name": "BaseBdev3", 
00:12:50.437 "uuid": "d461a2f2-c824-4782-a9d3-8b225881ba7d", 00:12:50.437 "is_configured": true, 00:12:50.437 "data_offset": 0, 00:12:50.437 "data_size": 65536 00:12:50.437 }, 00:12:50.437 { 00:12:50.437 "name": "BaseBdev4", 00:12:50.437 "uuid": "5b0dd828-11a2-44f4-aadf-711f4d39f92f", 00:12:50.437 "is_configured": true, 00:12:50.437 "data_offset": 0, 00:12:50.437 "data_size": 65536 00:12:50.437 } 00:12:50.437 ] 00:12:50.437 }' 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.437 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.001 [2024-10-08 16:20:44.169498] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.001 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.001 
16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.001 "name": "Existed_Raid", 00:12:51.001 "aliases": [ 00:12:51.001 "e647a3dc-0e97-4daa-8a47-65d65a09a04d" 00:12:51.001 ], 00:12:51.001 "product_name": "Raid Volume", 00:12:51.001 "block_size": 512, 00:12:51.001 "num_blocks": 262144, 00:12:51.001 "uuid": "e647a3dc-0e97-4daa-8a47-65d65a09a04d", 00:12:51.001 "assigned_rate_limits": { 00:12:51.001 "rw_ios_per_sec": 0, 00:12:51.001 "rw_mbytes_per_sec": 0, 00:12:51.001 "r_mbytes_per_sec": 0, 00:12:51.001 "w_mbytes_per_sec": 0 00:12:51.001 }, 00:12:51.001 "claimed": false, 00:12:51.001 "zoned": false, 00:12:51.001 "supported_io_types": { 00:12:51.001 "read": true, 00:12:51.001 "write": true, 00:12:51.001 "unmap": true, 00:12:51.001 "flush": true, 00:12:51.001 "reset": true, 00:12:51.001 "nvme_admin": false, 00:12:51.001 "nvme_io": false, 00:12:51.001 "nvme_io_md": false, 00:12:51.001 "write_zeroes": true, 00:12:51.001 "zcopy": false, 00:12:51.001 "get_zone_info": false, 00:12:51.001 "zone_management": false, 00:12:51.001 "zone_append": false, 00:12:51.001 "compare": false, 00:12:51.001 "compare_and_write": false, 00:12:51.001 "abort": false, 00:12:51.001 "seek_hole": false, 00:12:51.001 "seek_data": false, 00:12:51.001 "copy": false, 00:12:51.002 "nvme_iov_md": false 00:12:51.002 }, 00:12:51.002 "memory_domains": [ 00:12:51.002 { 00:12:51.002 "dma_device_id": "system", 00:12:51.002 "dma_device_type": 1 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.002 "dma_device_type": 2 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "system", 00:12:51.002 "dma_device_type": 1 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.002 "dma_device_type": 2 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "system", 00:12:51.002 "dma_device_type": 1 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:51.002 "dma_device_type": 2 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "system", 00:12:51.002 "dma_device_type": 1 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.002 "dma_device_type": 2 00:12:51.002 } 00:12:51.002 ], 00:12:51.002 "driver_specific": { 00:12:51.002 "raid": { 00:12:51.002 "uuid": "e647a3dc-0e97-4daa-8a47-65d65a09a04d", 00:12:51.002 "strip_size_kb": 64, 00:12:51.002 "state": "online", 00:12:51.002 "raid_level": "concat", 00:12:51.002 "superblock": false, 00:12:51.002 "num_base_bdevs": 4, 00:12:51.002 "num_base_bdevs_discovered": 4, 00:12:51.002 "num_base_bdevs_operational": 4, 00:12:51.002 "base_bdevs_list": [ 00:12:51.002 { 00:12:51.002 "name": "BaseBdev1", 00:12:51.002 "uuid": "f8a9d90c-afac-4a3f-a400-ae0e0ffc852f", 00:12:51.002 "is_configured": true, 00:12:51.002 "data_offset": 0, 00:12:51.002 "data_size": 65536 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "name": "BaseBdev2", 00:12:51.002 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:51.002 "is_configured": true, 00:12:51.002 "data_offset": 0, 00:12:51.002 "data_size": 65536 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "name": "BaseBdev3", 00:12:51.002 "uuid": "d461a2f2-c824-4782-a9d3-8b225881ba7d", 00:12:51.002 "is_configured": true, 00:12:51.002 "data_offset": 0, 00:12:51.002 "data_size": 65536 00:12:51.002 }, 00:12:51.002 { 00:12:51.002 "name": "BaseBdev4", 00:12:51.002 "uuid": "5b0dd828-11a2-44f4-aadf-711f4d39f92f", 00:12:51.002 "is_configured": true, 00:12:51.002 "data_offset": 0, 00:12:51.002 "data_size": 65536 00:12:51.002 } 00:12:51.002 ] 00:12:51.002 } 00:12:51.002 } 00:12:51.002 }' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:51.002 BaseBdev2 
00:12:51.002 BaseBdev3 00:12:51.002 BaseBdev4' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.002 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.259 16:20:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:51.259 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.260 16:20:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.260 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.260 [2024-10-08 16:20:44.533275] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.260 [2024-10-08 16:20:44.533328] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.260 [2024-10-08 16:20:44.533403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.518 "name": "Existed_Raid", 00:12:51.518 "uuid": "e647a3dc-0e97-4daa-8a47-65d65a09a04d", 00:12:51.518 "strip_size_kb": 64, 00:12:51.518 "state": "offline", 00:12:51.518 "raid_level": "concat", 00:12:51.518 "superblock": false, 00:12:51.518 "num_base_bdevs": 4, 00:12:51.518 "num_base_bdevs_discovered": 3, 00:12:51.518 "num_base_bdevs_operational": 3, 00:12:51.518 "base_bdevs_list": [ 00:12:51.518 { 00:12:51.518 "name": null, 00:12:51.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.518 "is_configured": false, 00:12:51.518 "data_offset": 0, 00:12:51.518 "data_size": 65536 00:12:51.518 }, 00:12:51.518 { 00:12:51.518 "name": "BaseBdev2", 00:12:51.518 "uuid": "42dd2097-38bc-4699-b37f-0099d6cc6717", 00:12:51.518 "is_configured": 
true, 00:12:51.518 "data_offset": 0, 00:12:51.518 "data_size": 65536 00:12:51.518 }, 00:12:51.518 { 00:12:51.518 "name": "BaseBdev3", 00:12:51.518 "uuid": "d461a2f2-c824-4782-a9d3-8b225881ba7d", 00:12:51.518 "is_configured": true, 00:12:51.518 "data_offset": 0, 00:12:51.518 "data_size": 65536 00:12:51.518 }, 00:12:51.518 { 00:12:51.518 "name": "BaseBdev4", 00:12:51.518 "uuid": "5b0dd828-11a2-44f4-aadf-711f4d39f92f", 00:12:51.518 "is_configured": true, 00:12:51.518 "data_offset": 0, 00:12:51.518 "data_size": 65536 00:12:51.518 } 00:12:51.518 ] 00:12:51.518 }' 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.518 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.084 [2024-10-08 16:20:45.215157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.084 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.085 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.085 [2024-10-08 16:20:45.361339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.357 16:20:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.357 [2024-10-08 16:20:45.510955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:52.357 [2024-10-08 16:20:45.511020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.357 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 BaseBdev2 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 [ 00:12:52.616 { 00:12:52.616 "name": "BaseBdev2", 00:12:52.616 "aliases": [ 00:12:52.616 "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77" 00:12:52.616 ], 00:12:52.616 "product_name": "Malloc disk", 00:12:52.616 "block_size": 512, 00:12:52.616 "num_blocks": 65536, 00:12:52.616 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:52.616 "assigned_rate_limits": { 00:12:52.616 "rw_ios_per_sec": 0, 00:12:52.616 "rw_mbytes_per_sec": 0, 00:12:52.616 "r_mbytes_per_sec": 0, 00:12:52.616 "w_mbytes_per_sec": 0 00:12:52.616 }, 00:12:52.616 "claimed": false, 00:12:52.616 "zoned": false, 00:12:52.616 "supported_io_types": { 00:12:52.616 "read": true, 00:12:52.616 "write": true, 00:12:52.616 "unmap": true, 00:12:52.616 "flush": true, 00:12:52.616 "reset": true, 00:12:52.616 "nvme_admin": false, 00:12:52.616 "nvme_io": false, 00:12:52.616 "nvme_io_md": false, 00:12:52.616 "write_zeroes": true, 00:12:52.616 "zcopy": true, 00:12:52.616 "get_zone_info": false, 00:12:52.616 "zone_management": false, 00:12:52.616 "zone_append": false, 00:12:52.616 "compare": false, 00:12:52.616 "compare_and_write": false, 00:12:52.616 "abort": true, 00:12:52.616 "seek_hole": false, 00:12:52.616 
"seek_data": false, 00:12:52.616 "copy": true, 00:12:52.616 "nvme_iov_md": false 00:12:52.616 }, 00:12:52.616 "memory_domains": [ 00:12:52.616 { 00:12:52.616 "dma_device_id": "system", 00:12:52.616 "dma_device_type": 1 00:12:52.616 }, 00:12:52.616 { 00:12:52.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.616 "dma_device_type": 2 00:12:52.616 } 00:12:52.616 ], 00:12:52.616 "driver_specific": {} 00:12:52.616 } 00:12:52.616 ] 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 BaseBdev3 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 [ 00:12:52.617 { 00:12:52.617 "name": "BaseBdev3", 00:12:52.617 "aliases": [ 00:12:52.617 "138b4f89-e7f0-4bbd-960f-bf987c0c68e7" 00:12:52.617 ], 00:12:52.617 "product_name": "Malloc disk", 00:12:52.617 "block_size": 512, 00:12:52.617 "num_blocks": 65536, 00:12:52.617 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:52.617 "assigned_rate_limits": { 00:12:52.617 "rw_ios_per_sec": 0, 00:12:52.617 "rw_mbytes_per_sec": 0, 00:12:52.617 "r_mbytes_per_sec": 0, 00:12:52.617 "w_mbytes_per_sec": 0 00:12:52.617 }, 00:12:52.617 "claimed": false, 00:12:52.617 "zoned": false, 00:12:52.617 "supported_io_types": { 00:12:52.617 "read": true, 00:12:52.617 "write": true, 00:12:52.617 "unmap": true, 00:12:52.617 "flush": true, 00:12:52.617 "reset": true, 00:12:52.617 "nvme_admin": false, 00:12:52.617 "nvme_io": false, 00:12:52.617 "nvme_io_md": false, 00:12:52.617 "write_zeroes": true, 00:12:52.617 "zcopy": true, 00:12:52.617 "get_zone_info": false, 00:12:52.617 "zone_management": false, 00:12:52.617 "zone_append": false, 00:12:52.617 "compare": false, 00:12:52.617 "compare_and_write": false, 00:12:52.617 "abort": true, 00:12:52.617 "seek_hole": false, 00:12:52.617 "seek_data": false, 
00:12:52.617 "copy": true, 00:12:52.617 "nvme_iov_md": false 00:12:52.617 }, 00:12:52.617 "memory_domains": [ 00:12:52.617 { 00:12:52.617 "dma_device_id": "system", 00:12:52.617 "dma_device_type": 1 00:12:52.617 }, 00:12:52.617 { 00:12:52.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.617 "dma_device_type": 2 00:12:52.617 } 00:12:52.617 ], 00:12:52.617 "driver_specific": {} 00:12:52.617 } 00:12:52.617 ] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 BaseBdev4 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:52.617 
16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 [ 00:12:52.617 { 00:12:52.617 "name": "BaseBdev4", 00:12:52.617 "aliases": [ 00:12:52.617 "1a8e550d-198a-4134-9f8d-1595c5e977c7" 00:12:52.617 ], 00:12:52.617 "product_name": "Malloc disk", 00:12:52.617 "block_size": 512, 00:12:52.617 "num_blocks": 65536, 00:12:52.617 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:52.617 "assigned_rate_limits": { 00:12:52.617 "rw_ios_per_sec": 0, 00:12:52.617 "rw_mbytes_per_sec": 0, 00:12:52.617 "r_mbytes_per_sec": 0, 00:12:52.617 "w_mbytes_per_sec": 0 00:12:52.617 }, 00:12:52.617 "claimed": false, 00:12:52.617 "zoned": false, 00:12:52.617 "supported_io_types": { 00:12:52.617 "read": true, 00:12:52.617 "write": true, 00:12:52.617 "unmap": true, 00:12:52.617 "flush": true, 00:12:52.617 "reset": true, 00:12:52.617 "nvme_admin": false, 00:12:52.617 "nvme_io": false, 00:12:52.617 "nvme_io_md": false, 00:12:52.617 "write_zeroes": true, 00:12:52.617 "zcopy": true, 00:12:52.617 "get_zone_info": false, 00:12:52.617 "zone_management": false, 00:12:52.617 "zone_append": false, 00:12:52.617 "compare": false, 00:12:52.617 "compare_and_write": false, 00:12:52.617 "abort": true, 00:12:52.617 "seek_hole": false, 00:12:52.617 "seek_data": false, 00:12:52.617 
"copy": true, 00:12:52.617 "nvme_iov_md": false 00:12:52.617 }, 00:12:52.617 "memory_domains": [ 00:12:52.617 { 00:12:52.617 "dma_device_id": "system", 00:12:52.617 "dma_device_type": 1 00:12:52.617 }, 00:12:52.617 { 00:12:52.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.617 "dma_device_type": 2 00:12:52.617 } 00:12:52.617 ], 00:12:52.617 "driver_specific": {} 00:12:52.617 } 00:12:52.617 ] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 [2024-10-08 16:20:45.870968] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.617 [2024-10-08 16:20:45.871293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.617 [2024-10-08 16:20:45.871365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.617 [2024-10-08 16:20:45.873823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.617 [2024-10-08 16:20:45.873891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 16:20:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.617 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.618 "name": "Existed_Raid", 00:12:52.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.618 "strip_size_kb": 64, 00:12:52.618 "state": "configuring", 00:12:52.618 
"raid_level": "concat", 00:12:52.618 "superblock": false, 00:12:52.618 "num_base_bdevs": 4, 00:12:52.618 "num_base_bdevs_discovered": 3, 00:12:52.618 "num_base_bdevs_operational": 4, 00:12:52.618 "base_bdevs_list": [ 00:12:52.618 { 00:12:52.618 "name": "BaseBdev1", 00:12:52.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.618 "is_configured": false, 00:12:52.618 "data_offset": 0, 00:12:52.618 "data_size": 0 00:12:52.618 }, 00:12:52.618 { 00:12:52.618 "name": "BaseBdev2", 00:12:52.618 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:52.618 "is_configured": true, 00:12:52.618 "data_offset": 0, 00:12:52.618 "data_size": 65536 00:12:52.618 }, 00:12:52.618 { 00:12:52.618 "name": "BaseBdev3", 00:12:52.618 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:52.618 "is_configured": true, 00:12:52.618 "data_offset": 0, 00:12:52.618 "data_size": 65536 00:12:52.618 }, 00:12:52.618 { 00:12:52.618 "name": "BaseBdev4", 00:12:52.618 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:52.618 "is_configured": true, 00:12:52.618 "data_offset": 0, 00:12:52.618 "data_size": 65536 00:12:52.618 } 00:12:52.618 ] 00:12:52.618 }' 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.618 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.185 [2024-10-08 16:20:46.419332] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.185 "name": "Existed_Raid", 00:12:53.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.185 "strip_size_kb": 64, 00:12:53.185 "state": "configuring", 00:12:53.185 "raid_level": "concat", 00:12:53.185 "superblock": false, 
00:12:53.185 "num_base_bdevs": 4, 00:12:53.185 "num_base_bdevs_discovered": 2, 00:12:53.185 "num_base_bdevs_operational": 4, 00:12:53.185 "base_bdevs_list": [ 00:12:53.185 { 00:12:53.185 "name": "BaseBdev1", 00:12:53.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.185 "is_configured": false, 00:12:53.185 "data_offset": 0, 00:12:53.185 "data_size": 0 00:12:53.185 }, 00:12:53.185 { 00:12:53.185 "name": null, 00:12:53.185 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:53.185 "is_configured": false, 00:12:53.185 "data_offset": 0, 00:12:53.185 "data_size": 65536 00:12:53.185 }, 00:12:53.185 { 00:12:53.185 "name": "BaseBdev3", 00:12:53.185 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:53.185 "is_configured": true, 00:12:53.185 "data_offset": 0, 00:12:53.185 "data_size": 65536 00:12:53.185 }, 00:12:53.185 { 00:12:53.185 "name": "BaseBdev4", 00:12:53.185 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:53.185 "is_configured": true, 00:12:53.185 "data_offset": 0, 00:12:53.185 "data_size": 65536 00:12:53.185 } 00:12:53.185 ] 00:12:53.185 }' 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.185 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.751 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.751 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.751 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.751 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.751 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:53.751 16:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.751 [2024-10-08 16:20:47.057277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.751 BaseBdev1 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.751 16:20:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.011 [ 00:12:54.011 { 00:12:54.011 "name": "BaseBdev1", 00:12:54.011 "aliases": [ 00:12:54.011 "e68ba947-0cbe-449a-82ce-da0aaf5c691c" 00:12:54.011 ], 00:12:54.011 "product_name": "Malloc disk", 00:12:54.011 "block_size": 512, 00:12:54.011 "num_blocks": 65536, 00:12:54.011 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:54.011 "assigned_rate_limits": { 00:12:54.011 "rw_ios_per_sec": 0, 00:12:54.011 "rw_mbytes_per_sec": 0, 00:12:54.011 "r_mbytes_per_sec": 0, 00:12:54.011 "w_mbytes_per_sec": 0 00:12:54.011 }, 00:12:54.011 "claimed": true, 00:12:54.011 "claim_type": "exclusive_write", 00:12:54.011 "zoned": false, 00:12:54.011 "supported_io_types": { 00:12:54.011 "read": true, 00:12:54.011 "write": true, 00:12:54.011 "unmap": true, 00:12:54.011 "flush": true, 00:12:54.011 "reset": true, 00:12:54.011 "nvme_admin": false, 00:12:54.011 "nvme_io": false, 00:12:54.011 "nvme_io_md": false, 00:12:54.011 "write_zeroes": true, 00:12:54.011 "zcopy": true, 00:12:54.011 "get_zone_info": false, 00:12:54.011 "zone_management": false, 00:12:54.011 "zone_append": false, 00:12:54.011 "compare": false, 00:12:54.011 "compare_and_write": false, 00:12:54.011 "abort": true, 00:12:54.011 "seek_hole": false, 00:12:54.011 "seek_data": false, 00:12:54.011 "copy": true, 00:12:54.011 "nvme_iov_md": false 00:12:54.011 }, 00:12:54.011 "memory_domains": [ 00:12:54.011 { 00:12:54.011 "dma_device_id": "system", 00:12:54.011 "dma_device_type": 1 00:12:54.011 }, 00:12:54.011 { 00:12:54.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.011 "dma_device_type": 2 00:12:54.011 } 00:12:54.011 ], 00:12:54.011 "driver_specific": {} 00:12:54.011 } 00:12:54.011 ] 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.011 "name": "Existed_Raid", 00:12:54.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.011 "strip_size_kb": 64, 00:12:54.011 "state": "configuring", 00:12:54.011 "raid_level": "concat", 00:12:54.011 "superblock": false, 
00:12:54.011 "num_base_bdevs": 4, 00:12:54.011 "num_base_bdevs_discovered": 3, 00:12:54.011 "num_base_bdevs_operational": 4, 00:12:54.011 "base_bdevs_list": [ 00:12:54.011 { 00:12:54.011 "name": "BaseBdev1", 00:12:54.011 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:54.011 "is_configured": true, 00:12:54.011 "data_offset": 0, 00:12:54.011 "data_size": 65536 00:12:54.011 }, 00:12:54.011 { 00:12:54.011 "name": null, 00:12:54.011 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:54.011 "is_configured": false, 00:12:54.011 "data_offset": 0, 00:12:54.011 "data_size": 65536 00:12:54.011 }, 00:12:54.011 { 00:12:54.011 "name": "BaseBdev3", 00:12:54.011 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:54.011 "is_configured": true, 00:12:54.011 "data_offset": 0, 00:12:54.011 "data_size": 65536 00:12:54.011 }, 00:12:54.011 { 00:12:54.011 "name": "BaseBdev4", 00:12:54.011 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:54.011 "is_configured": true, 00:12:54.011 "data_offset": 0, 00:12:54.011 "data_size": 65536 00:12:54.011 } 00:12:54.011 ] 00:12:54.011 }' 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.011 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:54.580 16:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.580 [2024-10-08 16:20:47.657536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.580 16:20:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.580 "name": "Existed_Raid", 00:12:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.580 "strip_size_kb": 64, 00:12:54.580 "state": "configuring", 00:12:54.580 "raid_level": "concat", 00:12:54.580 "superblock": false, 00:12:54.580 "num_base_bdevs": 4, 00:12:54.580 "num_base_bdevs_discovered": 2, 00:12:54.580 "num_base_bdevs_operational": 4, 00:12:54.580 "base_bdevs_list": [ 00:12:54.580 { 00:12:54.580 "name": "BaseBdev1", 00:12:54.580 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:54.580 "is_configured": true, 00:12:54.580 "data_offset": 0, 00:12:54.580 "data_size": 65536 00:12:54.580 }, 00:12:54.580 { 00:12:54.580 "name": null, 00:12:54.580 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:54.580 "is_configured": false, 00:12:54.580 "data_offset": 0, 00:12:54.580 "data_size": 65536 00:12:54.580 }, 00:12:54.580 { 00:12:54.580 "name": null, 00:12:54.580 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:54.580 "is_configured": false, 00:12:54.580 "data_offset": 0, 00:12:54.580 "data_size": 65536 00:12:54.580 }, 00:12:54.580 { 00:12:54.580 "name": "BaseBdev4", 00:12:54.580 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:54.580 "is_configured": true, 00:12:54.580 "data_offset": 0, 00:12:54.580 "data_size": 65536 00:12:54.580 } 00:12:54.580 ] 00:12:54.580 }' 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.580 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.147 [2024-10-08 16:20:48.229694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.147 "name": "Existed_Raid", 00:12:55.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.147 "strip_size_kb": 64, 00:12:55.147 "state": "configuring", 00:12:55.147 "raid_level": "concat", 00:12:55.147 "superblock": false, 00:12:55.147 "num_base_bdevs": 4, 00:12:55.147 "num_base_bdevs_discovered": 3, 00:12:55.147 "num_base_bdevs_operational": 4, 00:12:55.147 "base_bdevs_list": [ 00:12:55.147 { 00:12:55.147 "name": "BaseBdev1", 00:12:55.147 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:55.147 "is_configured": true, 00:12:55.147 "data_offset": 0, 00:12:55.147 "data_size": 65536 00:12:55.147 }, 00:12:55.147 { 00:12:55.147 "name": null, 00:12:55.147 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:55.147 "is_configured": false, 00:12:55.147 "data_offset": 0, 00:12:55.147 "data_size": 65536 00:12:55.147 }, 00:12:55.147 { 00:12:55.147 "name": "BaseBdev3", 00:12:55.147 "uuid": 
"138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:55.147 "is_configured": true, 00:12:55.147 "data_offset": 0, 00:12:55.147 "data_size": 65536 00:12:55.147 }, 00:12:55.147 { 00:12:55.147 "name": "BaseBdev4", 00:12:55.147 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:55.147 "is_configured": true, 00:12:55.147 "data_offset": 0, 00:12:55.147 "data_size": 65536 00:12:55.147 } 00:12:55.147 ] 00:12:55.147 }' 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.147 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.713 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.713 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.713 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.713 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.714 [2024-10-08 16:20:48.805990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.714 "name": "Existed_Raid", 00:12:55.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.714 "strip_size_kb": 64, 00:12:55.714 "state": "configuring", 00:12:55.714 "raid_level": "concat", 00:12:55.714 "superblock": false, 00:12:55.714 "num_base_bdevs": 4, 00:12:55.714 
"num_base_bdevs_discovered": 2, 00:12:55.714 "num_base_bdevs_operational": 4, 00:12:55.714 "base_bdevs_list": [ 00:12:55.714 { 00:12:55.714 "name": null, 00:12:55.714 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:55.714 "is_configured": false, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "name": null, 00:12:55.714 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:55.714 "is_configured": false, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "name": "BaseBdev3", 00:12:55.714 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:55.714 "is_configured": true, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "name": "BaseBdev4", 00:12:55.714 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:55.714 "is_configured": true, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 } 00:12:55.714 ] 00:12:55.714 }' 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.714 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.279 [2024-10-08 16:20:49.526263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.279 "name": "Existed_Raid", 00:12:56.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.279 "strip_size_kb": 64, 00:12:56.279 "state": "configuring", 00:12:56.279 "raid_level": "concat", 00:12:56.279 "superblock": false, 00:12:56.279 "num_base_bdevs": 4, 00:12:56.279 "num_base_bdevs_discovered": 3, 00:12:56.279 "num_base_bdevs_operational": 4, 00:12:56.279 "base_bdevs_list": [ 00:12:56.279 { 00:12:56.279 "name": null, 00:12:56.279 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:56.279 "is_configured": false, 00:12:56.279 "data_offset": 0, 00:12:56.279 "data_size": 65536 00:12:56.279 }, 00:12:56.279 { 00:12:56.279 "name": "BaseBdev2", 00:12:56.279 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:56.279 "is_configured": true, 00:12:56.279 "data_offset": 0, 00:12:56.279 "data_size": 65536 00:12:56.279 }, 00:12:56.279 { 00:12:56.279 "name": "BaseBdev3", 00:12:56.279 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:56.279 "is_configured": true, 00:12:56.279 "data_offset": 0, 00:12:56.279 "data_size": 65536 00:12:56.279 }, 00:12:56.279 { 00:12:56.279 "name": "BaseBdev4", 00:12:56.279 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:56.279 "is_configured": true, 00:12:56.279 "data_offset": 0, 00:12:56.279 "data_size": 65536 00:12:56.279 } 00:12:56.279 ] 00:12:56.279 }' 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.279 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.846 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.847 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e68ba947-0cbe-449a-82ce-da0aaf5c691c 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 [2024-10-08 16:20:50.210782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:57.105 [2024-10-08 16:20:50.211137] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:57.105 [2024-10-08 16:20:50.211162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:57.105 [2024-10-08 16:20:50.211548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:57.105 [2024-10-08 16:20:50.211774] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:57.105 [2024-10-08 16:20:50.211797] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:57.105 [2024-10-08 16:20:50.212121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.105 NewBaseBdev 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.105 16:20:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.105 [ 00:12:57.106 { 00:12:57.106 "name": "NewBaseBdev", 00:12:57.106 "aliases": [ 00:12:57.106 "e68ba947-0cbe-449a-82ce-da0aaf5c691c" 00:12:57.106 ], 00:12:57.106 "product_name": "Malloc disk", 00:12:57.106 "block_size": 512, 00:12:57.106 "num_blocks": 65536, 00:12:57.106 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:57.106 "assigned_rate_limits": { 00:12:57.106 "rw_ios_per_sec": 0, 00:12:57.106 "rw_mbytes_per_sec": 0, 00:12:57.106 "r_mbytes_per_sec": 0, 00:12:57.106 "w_mbytes_per_sec": 0 00:12:57.106 }, 00:12:57.106 "claimed": true, 00:12:57.106 "claim_type": "exclusive_write", 00:12:57.106 "zoned": false, 00:12:57.106 "supported_io_types": { 00:12:57.106 "read": true, 00:12:57.106 "write": true, 00:12:57.106 "unmap": true, 00:12:57.106 "flush": true, 00:12:57.106 "reset": true, 00:12:57.106 "nvme_admin": false, 00:12:57.106 "nvme_io": false, 00:12:57.106 "nvme_io_md": false, 00:12:57.106 "write_zeroes": true, 00:12:57.106 "zcopy": true, 00:12:57.106 "get_zone_info": false, 00:12:57.106 "zone_management": false, 00:12:57.106 "zone_append": false, 00:12:57.106 "compare": false, 00:12:57.106 "compare_and_write": false, 00:12:57.106 "abort": true, 00:12:57.106 "seek_hole": false, 00:12:57.106 "seek_data": false, 00:12:57.106 "copy": true, 00:12:57.106 "nvme_iov_md": false 00:12:57.106 }, 00:12:57.106 "memory_domains": [ 00:12:57.106 { 00:12:57.106 "dma_device_id": "system", 00:12:57.106 "dma_device_type": 1 00:12:57.106 }, 00:12:57.106 { 00:12:57.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.106 "dma_device_type": 2 00:12:57.106 } 00:12:57.106 ], 00:12:57.106 "driver_specific": {} 00:12:57.106 } 00:12:57.106 ] 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.106 "name": "Existed_Raid", 00:12:57.106 "uuid": "fd68f9a5-883a-4b39-a083-2d00d6ff7e92", 00:12:57.106 "strip_size_kb": 64, 00:12:57.106 "state": "online", 00:12:57.106 "raid_level": "concat", 00:12:57.106 "superblock": false, 00:12:57.106 
"num_base_bdevs": 4, 00:12:57.106 "num_base_bdevs_discovered": 4, 00:12:57.106 "num_base_bdevs_operational": 4, 00:12:57.106 "base_bdevs_list": [ 00:12:57.106 { 00:12:57.106 "name": "NewBaseBdev", 00:12:57.106 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:57.106 "is_configured": true, 00:12:57.106 "data_offset": 0, 00:12:57.106 "data_size": 65536 00:12:57.106 }, 00:12:57.106 { 00:12:57.106 "name": "BaseBdev2", 00:12:57.106 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:57.106 "is_configured": true, 00:12:57.106 "data_offset": 0, 00:12:57.106 "data_size": 65536 00:12:57.106 }, 00:12:57.106 { 00:12:57.106 "name": "BaseBdev3", 00:12:57.106 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:57.106 "is_configured": true, 00:12:57.106 "data_offset": 0, 00:12:57.106 "data_size": 65536 00:12:57.106 }, 00:12:57.106 { 00:12:57.106 "name": "BaseBdev4", 00:12:57.106 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:57.106 "is_configured": true, 00:12:57.106 "data_offset": 0, 00:12:57.106 "data_size": 65536 00:12:57.106 } 00:12:57.106 ] 00:12:57.106 }' 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.106 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.697 16:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.697 [2024-10-08 16:20:50.759763] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.697 "name": "Existed_Raid", 00:12:57.697 "aliases": [ 00:12:57.697 "fd68f9a5-883a-4b39-a083-2d00d6ff7e92" 00:12:57.697 ], 00:12:57.697 "product_name": "Raid Volume", 00:12:57.697 "block_size": 512, 00:12:57.697 "num_blocks": 262144, 00:12:57.697 "uuid": "fd68f9a5-883a-4b39-a083-2d00d6ff7e92", 00:12:57.697 "assigned_rate_limits": { 00:12:57.697 "rw_ios_per_sec": 0, 00:12:57.697 "rw_mbytes_per_sec": 0, 00:12:57.697 "r_mbytes_per_sec": 0, 00:12:57.697 "w_mbytes_per_sec": 0 00:12:57.697 }, 00:12:57.697 "claimed": false, 00:12:57.697 "zoned": false, 00:12:57.697 "supported_io_types": { 00:12:57.697 "read": true, 00:12:57.697 "write": true, 00:12:57.697 "unmap": true, 00:12:57.697 "flush": true, 00:12:57.697 "reset": true, 00:12:57.697 "nvme_admin": false, 00:12:57.697 "nvme_io": false, 00:12:57.697 "nvme_io_md": false, 00:12:57.697 "write_zeroes": true, 00:12:57.697 "zcopy": false, 00:12:57.697 "get_zone_info": false, 00:12:57.697 "zone_management": false, 00:12:57.697 "zone_append": false, 00:12:57.697 "compare": false, 00:12:57.697 "compare_and_write": false, 00:12:57.697 "abort": false, 00:12:57.697 "seek_hole": false, 00:12:57.697 "seek_data": false, 00:12:57.697 "copy": false, 00:12:57.697 "nvme_iov_md": false 00:12:57.697 }, 
00:12:57.697 "memory_domains": [ 00:12:57.697 { 00:12:57.697 "dma_device_id": "system", 00:12:57.697 "dma_device_type": 1 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.697 "dma_device_type": 2 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "system", 00:12:57.697 "dma_device_type": 1 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.697 "dma_device_type": 2 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "system", 00:12:57.697 "dma_device_type": 1 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.697 "dma_device_type": 2 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "system", 00:12:57.697 "dma_device_type": 1 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.697 "dma_device_type": 2 00:12:57.697 } 00:12:57.697 ], 00:12:57.697 "driver_specific": { 00:12:57.697 "raid": { 00:12:57.697 "uuid": "fd68f9a5-883a-4b39-a083-2d00d6ff7e92", 00:12:57.697 "strip_size_kb": 64, 00:12:57.697 "state": "online", 00:12:57.697 "raid_level": "concat", 00:12:57.697 "superblock": false, 00:12:57.697 "num_base_bdevs": 4, 00:12:57.697 "num_base_bdevs_discovered": 4, 00:12:57.697 "num_base_bdevs_operational": 4, 00:12:57.697 "base_bdevs_list": [ 00:12:57.697 { 00:12:57.697 "name": "NewBaseBdev", 00:12:57.697 "uuid": "e68ba947-0cbe-449a-82ce-da0aaf5c691c", 00:12:57.697 "is_configured": true, 00:12:57.697 "data_offset": 0, 00:12:57.697 "data_size": 65536 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "name": "BaseBdev2", 00:12:57.697 "uuid": "ce1f11d3-6557-4cf1-aef2-7c9552d5fe77", 00:12:57.697 "is_configured": true, 00:12:57.697 "data_offset": 0, 00:12:57.697 "data_size": 65536 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "name": "BaseBdev3", 00:12:57.697 "uuid": "138b4f89-e7f0-4bbd-960f-bf987c0c68e7", 00:12:57.697 "is_configured": true, 00:12:57.697 "data_offset": 0, 
00:12:57.697 "data_size": 65536 00:12:57.697 }, 00:12:57.697 { 00:12:57.697 "name": "BaseBdev4", 00:12:57.697 "uuid": "1a8e550d-198a-4134-9f8d-1595c5e977c7", 00:12:57.697 "is_configured": true, 00:12:57.697 "data_offset": 0, 00:12:57.697 "data_size": 65536 00:12:57.697 } 00:12:57.697 ] 00:12:57.697 } 00:12:57.697 } 00:12:57.697 }' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:57.697 BaseBdev2 00:12:57.697 BaseBdev3 00:12:57.697 BaseBdev4' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.697 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.698 16:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.956 [2024-10-08 16:20:51.159260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.956 [2024-10-08 16:20:51.159440] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.956 [2024-10-08 16:20:51.159585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.956 [2024-10-08 16:20:51.159680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.956 [2024-10-08 16:20:51.159698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71725 00:12:57.956 16:20:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71725 ']' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71725 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71725 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71725' 00:12:57.956 killing process with pid 71725 00:12:57.956 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71725 00:12:57.957 [2024-10-08 16:20:51.205239] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.957 16:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71725 00:12:58.524 [2024-10-08 16:20:51.554164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.475 16:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:59.475 00:12:59.475 real 0m13.209s 00:12:59.475 user 0m21.721s 00:12:59.475 sys 0m1.958s 00:12:59.475 16:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.475 16:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.475 ************************************ 00:12:59.475 END TEST raid_state_function_test 00:12:59.475 ************************************ 00:12:59.733 16:20:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:59.733 16:20:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:59.733 16:20:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.733 16:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.733 ************************************ 00:12:59.733 START TEST raid_state_function_test_sb 00:12:59.733 ************************************ 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72413 00:12:59.733 Process raid 
pid: 72413 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72413' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72413 00:12:59.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72413 ']' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.733 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.733 [2024-10-08 16:20:52.953131] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:59.733 [2024-10-08 16:20:52.953329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.991 [2024-10-08 16:20:53.132976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.250 [2024-10-08 16:20:53.372600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.250 [2024-10-08 16:20:53.571472] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.507 [2024-10-08 16:20:53.571911] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.765 [2024-10-08 16:20:53.906736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.765 [2024-10-08 16:20:53.906804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.765 [2024-10-08 16:20:53.906821] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.765 [2024-10-08 16:20:53.906840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.765 [2024-10-08 16:20:53.906851] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:00.765 [2024-10-08 16:20:53.906865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.765 [2024-10-08 16:20:53.906875] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:00.765 [2024-10-08 16:20:53.906889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.765 
16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.765 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.765 "name": "Existed_Raid", 00:13:00.765 "uuid": "d05424fb-d8b3-4bd6-870c-b649ae6676c9", 00:13:00.765 "strip_size_kb": 64, 00:13:00.765 "state": "configuring", 00:13:00.765 "raid_level": "concat", 00:13:00.765 "superblock": true, 00:13:00.765 "num_base_bdevs": 4, 00:13:00.765 "num_base_bdevs_discovered": 0, 00:13:00.765 "num_base_bdevs_operational": 4, 00:13:00.765 "base_bdevs_list": [ 00:13:00.765 { 00:13:00.766 "name": "BaseBdev1", 00:13:00.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.766 "is_configured": false, 00:13:00.766 "data_offset": 0, 00:13:00.766 "data_size": 0 00:13:00.766 }, 00:13:00.766 { 00:13:00.766 "name": "BaseBdev2", 00:13:00.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.766 "is_configured": false, 00:13:00.766 "data_offset": 0, 00:13:00.766 "data_size": 0 00:13:00.766 }, 00:13:00.766 { 00:13:00.766 "name": "BaseBdev3", 00:13:00.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.766 "is_configured": false, 00:13:00.766 "data_offset": 0, 00:13:00.766 "data_size": 0 00:13:00.766 }, 00:13:00.766 { 00:13:00.766 "name": "BaseBdev4", 00:13:00.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.766 "is_configured": false, 00:13:00.766 "data_offset": 0, 00:13:00.766 "data_size": 0 00:13:00.766 } 00:13:00.766 ] 00:13:00.766 }' 00:13:00.766 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.766 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.332 16:20:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.332 [2024-10-08 16:20:54.438749] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.332 [2024-10-08 16:20:54.438800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.332 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.333 [2024-10-08 16:20:54.446776] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.333 [2024-10-08 16:20:54.446828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.333 [2024-10-08 16:20:54.446844] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.333 [2024-10-08 16:20:54.446860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.333 [2024-10-08 16:20:54.446870] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.333 [2024-10-08 16:20:54.446884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.333 [2024-10-08 16:20:54.446894] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:01.333 [2024-10-08 16:20:54.446908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.333 [2024-10-08 16:20:54.503997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.333 BaseBdev1 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.333 [ 00:13:01.333 { 00:13:01.333 "name": "BaseBdev1", 00:13:01.333 "aliases": [ 00:13:01.333 "b0280dc3-6309-439a-8098-50e8cbc2504a" 00:13:01.333 ], 00:13:01.333 "product_name": "Malloc disk", 00:13:01.333 "block_size": 512, 00:13:01.333 "num_blocks": 65536, 00:13:01.333 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:01.333 "assigned_rate_limits": { 00:13:01.333 "rw_ios_per_sec": 0, 00:13:01.333 "rw_mbytes_per_sec": 0, 00:13:01.333 "r_mbytes_per_sec": 0, 00:13:01.333 "w_mbytes_per_sec": 0 00:13:01.333 }, 00:13:01.333 "claimed": true, 00:13:01.333 "claim_type": "exclusive_write", 00:13:01.333 "zoned": false, 00:13:01.333 "supported_io_types": { 00:13:01.333 "read": true, 00:13:01.333 "write": true, 00:13:01.333 "unmap": true, 00:13:01.333 "flush": true, 00:13:01.333 "reset": true, 00:13:01.333 "nvme_admin": false, 00:13:01.333 "nvme_io": false, 00:13:01.333 "nvme_io_md": false, 00:13:01.333 "write_zeroes": true, 00:13:01.333 "zcopy": true, 00:13:01.333 "get_zone_info": false, 00:13:01.333 "zone_management": false, 00:13:01.333 "zone_append": false, 00:13:01.333 "compare": false, 00:13:01.333 "compare_and_write": false, 00:13:01.333 "abort": true, 00:13:01.333 "seek_hole": false, 00:13:01.333 "seek_data": false, 00:13:01.333 "copy": true, 00:13:01.333 "nvme_iov_md": false 00:13:01.333 }, 00:13:01.333 "memory_domains": [ 00:13:01.333 { 00:13:01.333 "dma_device_id": "system", 00:13:01.333 "dma_device_type": 1 00:13:01.333 }, 00:13:01.333 { 00:13:01.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.333 "dma_device_type": 2 00:13:01.333 } 
00:13:01.333 ], 00:13:01.333 "driver_specific": {} 00:13:01.333 } 00:13:01.333 ] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.333 16:20:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.333 "name": "Existed_Raid", 00:13:01.333 "uuid": "2680d3b4-d234-492e-a500-8cb01bcfcdf5", 00:13:01.333 "strip_size_kb": 64, 00:13:01.333 "state": "configuring", 00:13:01.333 "raid_level": "concat", 00:13:01.333 "superblock": true, 00:13:01.333 "num_base_bdevs": 4, 00:13:01.333 "num_base_bdevs_discovered": 1, 00:13:01.333 "num_base_bdevs_operational": 4, 00:13:01.333 "base_bdevs_list": [ 00:13:01.333 { 00:13:01.333 "name": "BaseBdev1", 00:13:01.333 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:01.333 "is_configured": true, 00:13:01.333 "data_offset": 2048, 00:13:01.333 "data_size": 63488 00:13:01.333 }, 00:13:01.333 { 00:13:01.333 "name": "BaseBdev2", 00:13:01.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.333 "is_configured": false, 00:13:01.333 "data_offset": 0, 00:13:01.333 "data_size": 0 00:13:01.333 }, 00:13:01.333 { 00:13:01.333 "name": "BaseBdev3", 00:13:01.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.333 "is_configured": false, 00:13:01.333 "data_offset": 0, 00:13:01.333 "data_size": 0 00:13:01.333 }, 00:13:01.333 { 00:13:01.333 "name": "BaseBdev4", 00:13:01.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.333 "is_configured": false, 00:13:01.333 "data_offset": 0, 00:13:01.333 "data_size": 0 00:13:01.333 } 00:13:01.333 ] 00:13:01.333 }' 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.333 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.917 16:20:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 [2024-10-08 16:20:55.072235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.917 [2024-10-08 16:20:55.072316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 [2024-10-08 16:20:55.084309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.917 [2024-10-08 16:20:55.086877] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.917 [2024-10-08 16:20:55.086936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.917 [2024-10-08 16:20:55.086954] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.917 [2024-10-08 16:20:55.086973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.917 [2024-10-08 16:20:55.086983] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.917 [2024-10-08 16:20:55.086997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:01.917 "name": "Existed_Raid", 00:13:01.917 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:01.917 "strip_size_kb": 64, 00:13:01.917 "state": "configuring", 00:13:01.917 "raid_level": "concat", 00:13:01.917 "superblock": true, 00:13:01.917 "num_base_bdevs": 4, 00:13:01.917 "num_base_bdevs_discovered": 1, 00:13:01.917 "num_base_bdevs_operational": 4, 00:13:01.917 "base_bdevs_list": [ 00:13:01.917 { 00:13:01.917 "name": "BaseBdev1", 00:13:01.917 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:01.917 "is_configured": true, 00:13:01.917 "data_offset": 2048, 00:13:01.917 "data_size": 63488 00:13:01.917 }, 00:13:01.917 { 00:13:01.917 "name": "BaseBdev2", 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.917 "is_configured": false, 00:13:01.917 "data_offset": 0, 00:13:01.917 "data_size": 0 00:13:01.917 }, 00:13:01.917 { 00:13:01.917 "name": "BaseBdev3", 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.917 "is_configured": false, 00:13:01.917 "data_offset": 0, 00:13:01.917 "data_size": 0 00:13:01.917 }, 00:13:01.917 { 00:13:01.917 "name": "BaseBdev4", 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.917 "is_configured": false, 00:13:01.917 "data_offset": 0, 00:13:01.917 "data_size": 0 00:13:01.917 } 00:13:01.917 ] 00:13:01.917 }' 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.917 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.483 [2024-10-08 16:20:55.677759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:02.483 BaseBdev2 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.483 [ 00:13:02.483 { 00:13:02.483 "name": "BaseBdev2", 00:13:02.483 "aliases": [ 00:13:02.483 "9e818a5c-c57b-4e3f-bad5-292da5e45acf" 00:13:02.483 ], 00:13:02.483 "product_name": "Malloc disk", 00:13:02.483 "block_size": 512, 00:13:02.483 "num_blocks": 65536, 00:13:02.483 "uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 
00:13:02.483 "assigned_rate_limits": { 00:13:02.483 "rw_ios_per_sec": 0, 00:13:02.483 "rw_mbytes_per_sec": 0, 00:13:02.483 "r_mbytes_per_sec": 0, 00:13:02.483 "w_mbytes_per_sec": 0 00:13:02.483 }, 00:13:02.483 "claimed": true, 00:13:02.483 "claim_type": "exclusive_write", 00:13:02.483 "zoned": false, 00:13:02.483 "supported_io_types": { 00:13:02.483 "read": true, 00:13:02.483 "write": true, 00:13:02.483 "unmap": true, 00:13:02.483 "flush": true, 00:13:02.483 "reset": true, 00:13:02.483 "nvme_admin": false, 00:13:02.483 "nvme_io": false, 00:13:02.483 "nvme_io_md": false, 00:13:02.483 "write_zeroes": true, 00:13:02.483 "zcopy": true, 00:13:02.483 "get_zone_info": false, 00:13:02.483 "zone_management": false, 00:13:02.483 "zone_append": false, 00:13:02.483 "compare": false, 00:13:02.483 "compare_and_write": false, 00:13:02.483 "abort": true, 00:13:02.483 "seek_hole": false, 00:13:02.483 "seek_data": false, 00:13:02.483 "copy": true, 00:13:02.483 "nvme_iov_md": false 00:13:02.483 }, 00:13:02.483 "memory_domains": [ 00:13:02.483 { 00:13:02.483 "dma_device_id": "system", 00:13:02.483 "dma_device_type": 1 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.483 "dma_device_type": 2 00:13:02.483 } 00:13:02.483 ], 00:13:02.483 "driver_specific": {} 00:13:02.483 } 00:13:02.483 ] 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.483 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.484 "name": "Existed_Raid", 00:13:02.484 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:02.484 "strip_size_kb": 64, 00:13:02.484 "state": "configuring", 00:13:02.484 "raid_level": "concat", 00:13:02.484 "superblock": true, 00:13:02.484 "num_base_bdevs": 4, 00:13:02.484 "num_base_bdevs_discovered": 2, 00:13:02.484 
"num_base_bdevs_operational": 4, 00:13:02.484 "base_bdevs_list": [ 00:13:02.484 { 00:13:02.484 "name": "BaseBdev1", 00:13:02.484 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:02.484 "is_configured": true, 00:13:02.484 "data_offset": 2048, 00:13:02.484 "data_size": 63488 00:13:02.484 }, 00:13:02.484 { 00:13:02.484 "name": "BaseBdev2", 00:13:02.484 "uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 00:13:02.484 "is_configured": true, 00:13:02.484 "data_offset": 2048, 00:13:02.484 "data_size": 63488 00:13:02.484 }, 00:13:02.484 { 00:13:02.484 "name": "BaseBdev3", 00:13:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.484 "is_configured": false, 00:13:02.484 "data_offset": 0, 00:13:02.484 "data_size": 0 00:13:02.484 }, 00:13:02.484 { 00:13:02.484 "name": "BaseBdev4", 00:13:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.484 "is_configured": false, 00:13:02.484 "data_offset": 0, 00:13:02.484 "data_size": 0 00:13:02.484 } 00:13:02.484 ] 00:13:02.484 }' 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.484 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 [2024-10-08 16:20:56.275764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.051 BaseBdev3 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 [ 00:13:03.051 { 00:13:03.051 "name": "BaseBdev3", 00:13:03.051 "aliases": [ 00:13:03.051 "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1" 00:13:03.051 ], 00:13:03.051 "product_name": "Malloc disk", 00:13:03.051 "block_size": 512, 00:13:03.051 "num_blocks": 65536, 00:13:03.051 "uuid": "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1", 00:13:03.051 "assigned_rate_limits": { 00:13:03.051 "rw_ios_per_sec": 0, 00:13:03.051 "rw_mbytes_per_sec": 0, 00:13:03.051 "r_mbytes_per_sec": 0, 00:13:03.051 "w_mbytes_per_sec": 0 00:13:03.051 }, 00:13:03.051 "claimed": true, 00:13:03.051 "claim_type": "exclusive_write", 00:13:03.051 "zoned": false, 00:13:03.051 "supported_io_types": { 
00:13:03.051 "read": true, 00:13:03.051 "write": true, 00:13:03.051 "unmap": true, 00:13:03.051 "flush": true, 00:13:03.051 "reset": true, 00:13:03.051 "nvme_admin": false, 00:13:03.051 "nvme_io": false, 00:13:03.051 "nvme_io_md": false, 00:13:03.051 "write_zeroes": true, 00:13:03.051 "zcopy": true, 00:13:03.051 "get_zone_info": false, 00:13:03.051 "zone_management": false, 00:13:03.051 "zone_append": false, 00:13:03.051 "compare": false, 00:13:03.051 "compare_and_write": false, 00:13:03.051 "abort": true, 00:13:03.051 "seek_hole": false, 00:13:03.051 "seek_data": false, 00:13:03.051 "copy": true, 00:13:03.051 "nvme_iov_md": false 00:13:03.051 }, 00:13:03.051 "memory_domains": [ 00:13:03.051 { 00:13:03.051 "dma_device_id": "system", 00:13:03.051 "dma_device_type": 1 00:13:03.051 }, 00:13:03.051 { 00:13:03.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.051 "dma_device_type": 2 00:13:03.051 } 00:13:03.051 ], 00:13:03.051 "driver_specific": {} 00:13:03.051 } 00:13:03.051 ] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.051 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.051 "name": "Existed_Raid", 00:13:03.051 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:03.051 "strip_size_kb": 64, 00:13:03.051 "state": "configuring", 00:13:03.051 "raid_level": "concat", 00:13:03.051 "superblock": true, 00:13:03.051 "num_base_bdevs": 4, 00:13:03.052 "num_base_bdevs_discovered": 3, 00:13:03.052 "num_base_bdevs_operational": 4, 00:13:03.052 "base_bdevs_list": [ 00:13:03.052 { 00:13:03.052 "name": "BaseBdev1", 00:13:03.052 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:03.052 "is_configured": true, 00:13:03.052 "data_offset": 2048, 00:13:03.052 "data_size": 63488 00:13:03.052 }, 00:13:03.052 { 00:13:03.052 "name": "BaseBdev2", 00:13:03.052 
"uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 00:13:03.052 "is_configured": true, 00:13:03.052 "data_offset": 2048, 00:13:03.052 "data_size": 63488 00:13:03.052 }, 00:13:03.052 { 00:13:03.052 "name": "BaseBdev3", 00:13:03.052 "uuid": "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1", 00:13:03.052 "is_configured": true, 00:13:03.052 "data_offset": 2048, 00:13:03.052 "data_size": 63488 00:13:03.052 }, 00:13:03.052 { 00:13:03.052 "name": "BaseBdev4", 00:13:03.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.052 "is_configured": false, 00:13:03.052 "data_offset": 0, 00:13:03.052 "data_size": 0 00:13:03.052 } 00:13:03.052 ] 00:13:03.052 }' 00:13:03.052 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.052 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.621 [2024-10-08 16:20:56.871871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.621 [2024-10-08 16:20:56.872295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:03.621 [2024-10-08 16:20:56.872316] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:03.621 [2024-10-08 16:20:56.872685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:03.621 BaseBdev4 00:13:03.621 [2024-10-08 16:20:56.872887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:03.621 [2024-10-08 16:20:56.872911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:03.621 [2024-10-08 16:20:56.873082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.621 [ 00:13:03.621 { 00:13:03.621 "name": "BaseBdev4", 00:13:03.621 "aliases": [ 00:13:03.621 "0c91136c-9302-46ca-8704-72a60745ae96" 00:13:03.621 ], 00:13:03.621 "product_name": "Malloc disk", 00:13:03.621 "block_size": 512, 00:13:03.621 
"num_blocks": 65536, 00:13:03.621 "uuid": "0c91136c-9302-46ca-8704-72a60745ae96", 00:13:03.621 "assigned_rate_limits": { 00:13:03.621 "rw_ios_per_sec": 0, 00:13:03.621 "rw_mbytes_per_sec": 0, 00:13:03.621 "r_mbytes_per_sec": 0, 00:13:03.621 "w_mbytes_per_sec": 0 00:13:03.621 }, 00:13:03.621 "claimed": true, 00:13:03.621 "claim_type": "exclusive_write", 00:13:03.621 "zoned": false, 00:13:03.621 "supported_io_types": { 00:13:03.621 "read": true, 00:13:03.621 "write": true, 00:13:03.621 "unmap": true, 00:13:03.621 "flush": true, 00:13:03.621 "reset": true, 00:13:03.621 "nvme_admin": false, 00:13:03.621 "nvme_io": false, 00:13:03.621 "nvme_io_md": false, 00:13:03.621 "write_zeroes": true, 00:13:03.621 "zcopy": true, 00:13:03.621 "get_zone_info": false, 00:13:03.621 "zone_management": false, 00:13:03.621 "zone_append": false, 00:13:03.621 "compare": false, 00:13:03.621 "compare_and_write": false, 00:13:03.621 "abort": true, 00:13:03.621 "seek_hole": false, 00:13:03.621 "seek_data": false, 00:13:03.621 "copy": true, 00:13:03.621 "nvme_iov_md": false 00:13:03.621 }, 00:13:03.621 "memory_domains": [ 00:13:03.621 { 00:13:03.621 "dma_device_id": "system", 00:13:03.621 "dma_device_type": 1 00:13:03.621 }, 00:13:03.621 { 00:13:03.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.621 "dma_device_type": 2 00:13:03.621 } 00:13:03.621 ], 00:13:03.621 "driver_specific": {} 00:13:03.621 } 00:13:03.621 ] 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.621 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.880 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.880 "name": "Existed_Raid", 00:13:03.880 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:03.880 "strip_size_kb": 64, 00:13:03.880 "state": "online", 00:13:03.880 "raid_level": "concat", 00:13:03.880 "superblock": true, 00:13:03.880 "num_base_bdevs": 4, 
00:13:03.880 "num_base_bdevs_discovered": 4, 00:13:03.880 "num_base_bdevs_operational": 4, 00:13:03.880 "base_bdevs_list": [ 00:13:03.880 { 00:13:03.880 "name": "BaseBdev1", 00:13:03.880 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 2048, 00:13:03.880 "data_size": 63488 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev2", 00:13:03.880 "uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 2048, 00:13:03.880 "data_size": 63488 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev3", 00:13:03.880 "uuid": "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 2048, 00:13:03.880 "data_size": 63488 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev4", 00:13:03.880 "uuid": "0c91136c-9302-46ca-8704-72a60745ae96", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 2048, 00:13:03.880 "data_size": 63488 00:13:03.880 } 00:13:03.880 ] 00:13:03.880 }' 00:13:03.880 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.880 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.139 
16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.139 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.139 [2024-10-08 16:20:57.440482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.413 "name": "Existed_Raid", 00:13:04.413 "aliases": [ 00:13:04.413 "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb" 00:13:04.413 ], 00:13:04.413 "product_name": "Raid Volume", 00:13:04.413 "block_size": 512, 00:13:04.413 "num_blocks": 253952, 00:13:04.413 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:04.413 "assigned_rate_limits": { 00:13:04.413 "rw_ios_per_sec": 0, 00:13:04.413 "rw_mbytes_per_sec": 0, 00:13:04.413 "r_mbytes_per_sec": 0, 00:13:04.413 "w_mbytes_per_sec": 0 00:13:04.413 }, 00:13:04.413 "claimed": false, 00:13:04.413 "zoned": false, 00:13:04.413 "supported_io_types": { 00:13:04.413 "read": true, 00:13:04.413 "write": true, 00:13:04.413 "unmap": true, 00:13:04.413 "flush": true, 00:13:04.413 "reset": true, 00:13:04.413 "nvme_admin": false, 00:13:04.413 "nvme_io": false, 00:13:04.413 "nvme_io_md": false, 00:13:04.413 "write_zeroes": true, 00:13:04.413 "zcopy": false, 00:13:04.413 "get_zone_info": false, 00:13:04.413 "zone_management": false, 00:13:04.413 "zone_append": false, 00:13:04.413 "compare": false, 00:13:04.413 "compare_and_write": false, 00:13:04.413 "abort": false, 00:13:04.413 "seek_hole": false, 00:13:04.413 "seek_data": false, 00:13:04.413 "copy": false, 00:13:04.413 
"nvme_iov_md": false 00:13:04.413 }, 00:13:04.413 "memory_domains": [ 00:13:04.413 { 00:13:04.413 "dma_device_id": "system", 00:13:04.413 "dma_device_type": 1 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.413 "dma_device_type": 2 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "system", 00:13:04.413 "dma_device_type": 1 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.413 "dma_device_type": 2 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "system", 00:13:04.413 "dma_device_type": 1 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.413 "dma_device_type": 2 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "system", 00:13:04.413 "dma_device_type": 1 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.413 "dma_device_type": 2 00:13:04.413 } 00:13:04.413 ], 00:13:04.413 "driver_specific": { 00:13:04.413 "raid": { 00:13:04.413 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:04.413 "strip_size_kb": 64, 00:13:04.413 "state": "online", 00:13:04.413 "raid_level": "concat", 00:13:04.413 "superblock": true, 00:13:04.413 "num_base_bdevs": 4, 00:13:04.413 "num_base_bdevs_discovered": 4, 00:13:04.413 "num_base_bdevs_operational": 4, 00:13:04.413 "base_bdevs_list": [ 00:13:04.413 { 00:13:04.413 "name": "BaseBdev1", 00:13:04.413 "uuid": "b0280dc3-6309-439a-8098-50e8cbc2504a", 00:13:04.413 "is_configured": true, 00:13:04.413 "data_offset": 2048, 00:13:04.413 "data_size": 63488 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "name": "BaseBdev2", 00:13:04.413 "uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 00:13:04.413 "is_configured": true, 00:13:04.413 "data_offset": 2048, 00:13:04.413 "data_size": 63488 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "name": "BaseBdev3", 00:13:04.413 "uuid": "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1", 00:13:04.413 "is_configured": true, 
00:13:04.413 "data_offset": 2048, 00:13:04.413 "data_size": 63488 00:13:04.413 }, 00:13:04.413 { 00:13:04.413 "name": "BaseBdev4", 00:13:04.413 "uuid": "0c91136c-9302-46ca-8704-72a60745ae96", 00:13:04.413 "is_configured": true, 00:13:04.413 "data_offset": 2048, 00:13:04.413 "data_size": 63488 00:13:04.413 } 00:13:04.413 ] 00:13:04.413 } 00:13:04.413 } 00:13:04.413 }' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:04.413 BaseBdev2 00:13:04.413 BaseBdev3 00:13:04.413 BaseBdev4' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.413 16:20:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.413 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.679 [2024-10-08 16:20:57.816407] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.679 [2024-10-08 16:20:57.816447] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.679 [2024-10-08 16:20:57.816548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:04.679 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.680 "name": "Existed_Raid", 00:13:04.680 "uuid": "0163b9f8-78b2-43d9-9bc0-75e8f067a9fb", 00:13:04.680 "strip_size_kb": 64, 00:13:04.680 "state": "offline", 00:13:04.680 "raid_level": "concat", 00:13:04.680 "superblock": true, 00:13:04.680 "num_base_bdevs": 4, 00:13:04.680 "num_base_bdevs_discovered": 3, 00:13:04.680 "num_base_bdevs_operational": 3, 00:13:04.680 "base_bdevs_list": [ 00:13:04.680 { 00:13:04.680 "name": null, 00:13:04.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.680 "is_configured": false, 00:13:04.680 "data_offset": 0, 00:13:04.680 "data_size": 63488 00:13:04.680 }, 00:13:04.680 { 00:13:04.680 "name": "BaseBdev2", 00:13:04.680 "uuid": "9e818a5c-c57b-4e3f-bad5-292da5e45acf", 00:13:04.680 "is_configured": true, 00:13:04.680 "data_offset": 2048, 00:13:04.680 "data_size": 63488 00:13:04.680 }, 00:13:04.680 { 00:13:04.680 "name": "BaseBdev3", 00:13:04.680 "uuid": "ff86c5a9-7681-414c-a67e-2b5a9ccf18c1", 00:13:04.680 "is_configured": true, 00:13:04.680 "data_offset": 2048, 00:13:04.680 "data_size": 63488 00:13:04.680 }, 00:13:04.680 { 00:13:04.680 "name": "BaseBdev4", 00:13:04.680 "uuid": "0c91136c-9302-46ca-8704-72a60745ae96", 00:13:04.680 "is_configured": true, 00:13:04.680 "data_offset": 2048, 00:13:04.680 "data_size": 63488 00:13:04.680 } 00:13:04.680 ] 00:13:04.680 }' 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.680 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.249 
16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.249 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.249 [2024-10-08 16:20:58.495420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.510 [2024-10-08 16:20:58.640879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:05.510 16:20:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.510 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.510 [2024-10-08 16:20:58.782337] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:05.510 [2024-10-08 16:20:58.782417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 BaseBdev2 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 [ 00:13:05.769 { 00:13:05.769 "name": "BaseBdev2", 00:13:05.769 "aliases": [ 00:13:05.769 
"3e0cfa22-130e-47b6-a824-b5fc1f206652" 00:13:05.769 ], 00:13:05.769 "product_name": "Malloc disk", 00:13:05.769 "block_size": 512, 00:13:05.769 "num_blocks": 65536, 00:13:05.769 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:05.769 "assigned_rate_limits": { 00:13:05.769 "rw_ios_per_sec": 0, 00:13:05.769 "rw_mbytes_per_sec": 0, 00:13:05.769 "r_mbytes_per_sec": 0, 00:13:05.769 "w_mbytes_per_sec": 0 00:13:05.769 }, 00:13:05.769 "claimed": false, 00:13:05.769 "zoned": false, 00:13:05.769 "supported_io_types": { 00:13:05.769 "read": true, 00:13:05.769 "write": true, 00:13:05.769 "unmap": true, 00:13:05.769 "flush": true, 00:13:05.769 "reset": true, 00:13:05.769 "nvme_admin": false, 00:13:05.769 "nvme_io": false, 00:13:05.769 "nvme_io_md": false, 00:13:05.769 "write_zeroes": true, 00:13:05.769 "zcopy": true, 00:13:05.769 "get_zone_info": false, 00:13:05.769 "zone_management": false, 00:13:05.769 "zone_append": false, 00:13:05.769 "compare": false, 00:13:05.769 "compare_and_write": false, 00:13:05.769 "abort": true, 00:13:05.769 "seek_hole": false, 00:13:05.769 "seek_data": false, 00:13:05.769 "copy": true, 00:13:05.769 "nvme_iov_md": false 00:13:05.769 }, 00:13:05.769 "memory_domains": [ 00:13:05.769 { 00:13:05.769 "dma_device_id": "system", 00:13:05.769 "dma_device_type": 1 00:13:05.769 }, 00:13:05.769 { 00:13:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.769 "dma_device_type": 2 00:13:05.769 } 00:13:05.769 ], 00:13:05.769 "driver_specific": {} 00:13:05.769 } 00:13:05.769 ] 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.769 16:20:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 BaseBdev3 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.769 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.769 [ 00:13:05.769 { 
00:13:05.769 "name": "BaseBdev3", 00:13:05.770 "aliases": [ 00:13:05.770 "f0148b97-a29e-4884-afbe-26ecf50a3bc3" 00:13:05.770 ], 00:13:05.770 "product_name": "Malloc disk", 00:13:05.770 "block_size": 512, 00:13:05.770 "num_blocks": 65536, 00:13:05.770 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:05.770 "assigned_rate_limits": { 00:13:05.770 "rw_ios_per_sec": 0, 00:13:05.770 "rw_mbytes_per_sec": 0, 00:13:05.770 "r_mbytes_per_sec": 0, 00:13:05.770 "w_mbytes_per_sec": 0 00:13:05.770 }, 00:13:05.770 "claimed": false, 00:13:05.770 "zoned": false, 00:13:05.770 "supported_io_types": { 00:13:05.770 "read": true, 00:13:05.770 "write": true, 00:13:05.770 "unmap": true, 00:13:05.770 "flush": true, 00:13:05.770 "reset": true, 00:13:05.770 "nvme_admin": false, 00:13:05.770 "nvme_io": false, 00:13:05.770 "nvme_io_md": false, 00:13:05.770 "write_zeroes": true, 00:13:05.770 "zcopy": true, 00:13:05.770 "get_zone_info": false, 00:13:05.770 "zone_management": false, 00:13:05.770 "zone_append": false, 00:13:05.770 "compare": false, 00:13:05.770 "compare_and_write": false, 00:13:05.770 "abort": true, 00:13:05.770 "seek_hole": false, 00:13:05.770 "seek_data": false, 00:13:05.770 "copy": true, 00:13:05.770 "nvme_iov_md": false 00:13:05.770 }, 00:13:05.770 "memory_domains": [ 00:13:05.770 { 00:13:05.770 "dma_device_id": "system", 00:13:05.770 "dma_device_type": 1 00:13:05.770 }, 00:13:05.770 { 00:13:05.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.770 "dma_device_type": 2 00:13:05.770 } 00:13:05.770 ], 00:13:05.770 "driver_specific": {} 00:13:05.770 } 00:13:05.770 ] 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.770 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.028 BaseBdev4 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:06.028 [ 00:13:06.028 { 00:13:06.028 "name": "BaseBdev4", 00:13:06.028 "aliases": [ 00:13:06.028 "590194a4-18f7-4e20-bb72-dd8cb367e3e9" 00:13:06.028 ], 00:13:06.028 "product_name": "Malloc disk", 00:13:06.028 "block_size": 512, 00:13:06.028 "num_blocks": 65536, 00:13:06.028 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:06.028 "assigned_rate_limits": { 00:13:06.028 "rw_ios_per_sec": 0, 00:13:06.028 "rw_mbytes_per_sec": 0, 00:13:06.028 "r_mbytes_per_sec": 0, 00:13:06.028 "w_mbytes_per_sec": 0 00:13:06.028 }, 00:13:06.028 "claimed": false, 00:13:06.028 "zoned": false, 00:13:06.028 "supported_io_types": { 00:13:06.028 "read": true, 00:13:06.028 "write": true, 00:13:06.028 "unmap": true, 00:13:06.028 "flush": true, 00:13:06.028 "reset": true, 00:13:06.028 "nvme_admin": false, 00:13:06.028 "nvme_io": false, 00:13:06.028 "nvme_io_md": false, 00:13:06.028 "write_zeroes": true, 00:13:06.028 "zcopy": true, 00:13:06.028 "get_zone_info": false, 00:13:06.028 "zone_management": false, 00:13:06.028 "zone_append": false, 00:13:06.028 "compare": false, 00:13:06.028 "compare_and_write": false, 00:13:06.028 "abort": true, 00:13:06.028 "seek_hole": false, 00:13:06.028 "seek_data": false, 00:13:06.028 "copy": true, 00:13:06.028 "nvme_iov_md": false 00:13:06.028 }, 00:13:06.028 "memory_domains": [ 00:13:06.028 { 00:13:06.028 "dma_device_id": "system", 00:13:06.028 "dma_device_type": 1 00:13:06.028 }, 00:13:06.028 { 00:13:06.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.028 "dma_device_type": 2 00:13:06.028 } 00:13:06.028 ], 00:13:06.028 "driver_specific": {} 00:13:06.028 } 00:13:06.028 ] 00:13:06.028 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:06.029 16:20:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.029 [2024-10-08 16:20:59.160164] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.029 [2024-10-08 16:20:59.160250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.029 [2024-10-08 16:20:59.160313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.029 [2024-10-08 16:20:59.162924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.029 [2024-10-08 16:20:59.163016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.029 "name": "Existed_Raid", 00:13:06.029 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:06.029 "strip_size_kb": 64, 00:13:06.029 "state": "configuring", 00:13:06.029 "raid_level": "concat", 00:13:06.029 "superblock": true, 00:13:06.029 "num_base_bdevs": 4, 00:13:06.029 "num_base_bdevs_discovered": 3, 00:13:06.029 "num_base_bdevs_operational": 4, 00:13:06.029 "base_bdevs_list": [ 00:13:06.029 { 00:13:06.029 "name": "BaseBdev1", 00:13:06.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.029 "is_configured": false, 00:13:06.029 "data_offset": 0, 00:13:06.029 "data_size": 0 00:13:06.029 }, 00:13:06.029 { 00:13:06.029 "name": "BaseBdev2", 00:13:06.029 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:06.029 "is_configured": true, 00:13:06.029 "data_offset": 2048, 00:13:06.029 "data_size": 63488 
00:13:06.029 }, 00:13:06.029 { 00:13:06.029 "name": "BaseBdev3", 00:13:06.029 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:06.029 "is_configured": true, 00:13:06.029 "data_offset": 2048, 00:13:06.029 "data_size": 63488 00:13:06.029 }, 00:13:06.029 { 00:13:06.029 "name": "BaseBdev4", 00:13:06.029 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:06.029 "is_configured": true, 00:13:06.029 "data_offset": 2048, 00:13:06.029 "data_size": 63488 00:13:06.029 } 00:13:06.029 ] 00:13:06.029 }' 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.029 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 [2024-10-08 16:20:59.696414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.597 "name": "Existed_Raid", 00:13:06.597 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:06.597 "strip_size_kb": 64, 00:13:06.597 "state": "configuring", 00:13:06.597 "raid_level": "concat", 00:13:06.597 "superblock": true, 00:13:06.597 "num_base_bdevs": 4, 00:13:06.597 "num_base_bdevs_discovered": 2, 00:13:06.597 "num_base_bdevs_operational": 4, 00:13:06.597 "base_bdevs_list": [ 00:13:06.597 { 00:13:06.597 "name": "BaseBdev1", 00:13:06.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.597 "is_configured": false, 00:13:06.597 "data_offset": 0, 00:13:06.597 "data_size": 0 00:13:06.597 }, 00:13:06.597 { 00:13:06.597 "name": null, 00:13:06.597 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:06.597 "is_configured": false, 00:13:06.597 "data_offset": 0, 00:13:06.597 "data_size": 63488 
00:13:06.597 }, 00:13:06.597 { 00:13:06.597 "name": "BaseBdev3", 00:13:06.597 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:06.597 "is_configured": true, 00:13:06.597 "data_offset": 2048, 00:13:06.597 "data_size": 63488 00:13:06.597 }, 00:13:06.597 { 00:13:06.597 "name": "BaseBdev4", 00:13:06.597 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:06.597 "is_configured": true, 00:13:06.597 "data_offset": 2048, 00:13:06.597 "data_size": 63488 00:13:06.597 } 00:13:06.597 ] 00:13:06.597 }' 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.597 16:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 [2024-10-08 16:21:00.335222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.177 BaseBdev1 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 [ 00:13:07.177 { 00:13:07.177 "name": "BaseBdev1", 00:13:07.177 "aliases": [ 00:13:07.177 "4f0fad6f-b56c-4cf9-a84f-ded8a6040897" 00:13:07.177 ], 00:13:07.177 "product_name": "Malloc disk", 00:13:07.177 "block_size": 512, 00:13:07.177 "num_blocks": 65536, 00:13:07.177 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:07.177 "assigned_rate_limits": { 00:13:07.177 "rw_ios_per_sec": 0, 00:13:07.177 "rw_mbytes_per_sec": 0, 
00:13:07.177 "r_mbytes_per_sec": 0, 00:13:07.177 "w_mbytes_per_sec": 0 00:13:07.177 }, 00:13:07.177 "claimed": true, 00:13:07.177 "claim_type": "exclusive_write", 00:13:07.177 "zoned": false, 00:13:07.177 "supported_io_types": { 00:13:07.177 "read": true, 00:13:07.177 "write": true, 00:13:07.177 "unmap": true, 00:13:07.177 "flush": true, 00:13:07.177 "reset": true, 00:13:07.177 "nvme_admin": false, 00:13:07.177 "nvme_io": false, 00:13:07.177 "nvme_io_md": false, 00:13:07.177 "write_zeroes": true, 00:13:07.177 "zcopy": true, 00:13:07.177 "get_zone_info": false, 00:13:07.177 "zone_management": false, 00:13:07.177 "zone_append": false, 00:13:07.177 "compare": false, 00:13:07.177 "compare_and_write": false, 00:13:07.177 "abort": true, 00:13:07.177 "seek_hole": false, 00:13:07.177 "seek_data": false, 00:13:07.177 "copy": true, 00:13:07.177 "nvme_iov_md": false 00:13:07.177 }, 00:13:07.177 "memory_domains": [ 00:13:07.177 { 00:13:07.177 "dma_device_id": "system", 00:13:07.177 "dma_device_type": 1 00:13:07.177 }, 00:13:07.177 { 00:13:07.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.177 "dma_device_type": 2 00:13:07.177 } 00:13:07.177 ], 00:13:07.177 "driver_specific": {} 00:13:07.177 } 00:13:07.177 ] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.177 16:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.177 "name": "Existed_Raid", 00:13:07.177 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:07.177 "strip_size_kb": 64, 00:13:07.177 "state": "configuring", 00:13:07.177 "raid_level": "concat", 00:13:07.177 "superblock": true, 00:13:07.177 "num_base_bdevs": 4, 00:13:07.177 "num_base_bdevs_discovered": 3, 00:13:07.177 "num_base_bdevs_operational": 4, 00:13:07.177 "base_bdevs_list": [ 00:13:07.177 { 00:13:07.177 "name": "BaseBdev1", 00:13:07.177 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:07.177 "is_configured": true, 00:13:07.177 "data_offset": 2048, 00:13:07.177 "data_size": 63488 00:13:07.177 }, 00:13:07.177 { 
00:13:07.177 "name": null, 00:13:07.177 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:07.177 "is_configured": false, 00:13:07.177 "data_offset": 0, 00:13:07.177 "data_size": 63488 00:13:07.177 }, 00:13:07.177 { 00:13:07.177 "name": "BaseBdev3", 00:13:07.177 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:07.177 "is_configured": true, 00:13:07.177 "data_offset": 2048, 00:13:07.177 "data_size": 63488 00:13:07.177 }, 00:13:07.177 { 00:13:07.177 "name": "BaseBdev4", 00:13:07.177 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:07.177 "is_configured": true, 00:13:07.177 "data_offset": 2048, 00:13:07.177 "data_size": 63488 00:13:07.177 } 00:13:07.177 ] 00:13:07.177 }' 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.177 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.756 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.756 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.756 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.756 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:07.756 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.757 [2024-10-08 16:21:00.939470] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.757 16:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.757 "name": "Existed_Raid", 00:13:07.757 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:07.757 "strip_size_kb": 64, 00:13:07.757 "state": "configuring", 00:13:07.757 "raid_level": "concat", 00:13:07.757 "superblock": true, 00:13:07.757 "num_base_bdevs": 4, 00:13:07.757 "num_base_bdevs_discovered": 2, 00:13:07.757 "num_base_bdevs_operational": 4, 00:13:07.757 "base_bdevs_list": [ 00:13:07.757 { 00:13:07.757 "name": "BaseBdev1", 00:13:07.757 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:07.757 "is_configured": true, 00:13:07.757 "data_offset": 2048, 00:13:07.757 "data_size": 63488 00:13:07.757 }, 00:13:07.757 { 00:13:07.757 "name": null, 00:13:07.757 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:07.757 "is_configured": false, 00:13:07.757 "data_offset": 0, 00:13:07.757 "data_size": 63488 00:13:07.757 }, 00:13:07.757 { 00:13:07.757 "name": null, 00:13:07.757 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:07.757 "is_configured": false, 00:13:07.757 "data_offset": 0, 00:13:07.757 "data_size": 63488 00:13:07.757 }, 00:13:07.757 { 00:13:07.757 "name": "BaseBdev4", 00:13:07.757 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:07.757 "is_configured": true, 00:13:07.757 "data_offset": 2048, 00:13:07.757 "data_size": 63488 00:13:07.757 } 00:13:07.757 ] 00:13:07.757 }' 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.757 16:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.324 
16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.324 [2024-10-08 16:21:01.535668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.324 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.324 "name": "Existed_Raid", 00:13:08.324 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:08.324 "strip_size_kb": 64, 00:13:08.324 "state": "configuring", 00:13:08.324 "raid_level": "concat", 00:13:08.324 "superblock": true, 00:13:08.324 "num_base_bdevs": 4, 00:13:08.324 "num_base_bdevs_discovered": 3, 00:13:08.324 "num_base_bdevs_operational": 4, 00:13:08.324 "base_bdevs_list": [ 00:13:08.324 { 00:13:08.324 "name": "BaseBdev1", 00:13:08.324 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:08.324 "is_configured": true, 00:13:08.324 "data_offset": 2048, 00:13:08.324 "data_size": 63488 00:13:08.324 }, 00:13:08.324 { 00:13:08.324 "name": null, 00:13:08.324 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:08.324 "is_configured": false, 00:13:08.324 "data_offset": 0, 00:13:08.324 "data_size": 63488 00:13:08.324 }, 00:13:08.325 { 00:13:08.325 "name": "BaseBdev3", 00:13:08.325 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:08.325 "is_configured": true, 00:13:08.325 "data_offset": 2048, 00:13:08.325 "data_size": 63488 00:13:08.325 }, 00:13:08.325 { 00:13:08.325 "name": "BaseBdev4", 00:13:08.325 "uuid": 
"590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:08.325 "is_configured": true, 00:13:08.325 "data_offset": 2048, 00:13:08.325 "data_size": 63488 00:13:08.325 } 00:13:08.325 ] 00:13:08.325 }' 00:13:08.325 16:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.325 16:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.892 [2024-10-08 16:21:02.111834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.892 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.151 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.151 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.151 "name": "Existed_Raid", 00:13:09.151 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:09.151 "strip_size_kb": 64, 00:13:09.151 "state": "configuring", 00:13:09.151 "raid_level": "concat", 00:13:09.151 "superblock": true, 00:13:09.151 "num_base_bdevs": 4, 00:13:09.151 "num_base_bdevs_discovered": 2, 00:13:09.151 "num_base_bdevs_operational": 4, 00:13:09.151 "base_bdevs_list": [ 00:13:09.151 { 00:13:09.151 "name": null, 00:13:09.151 
"uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:09.151 "is_configured": false, 00:13:09.151 "data_offset": 0, 00:13:09.151 "data_size": 63488 00:13:09.151 }, 00:13:09.151 { 00:13:09.151 "name": null, 00:13:09.151 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:09.151 "is_configured": false, 00:13:09.151 "data_offset": 0, 00:13:09.151 "data_size": 63488 00:13:09.151 }, 00:13:09.151 { 00:13:09.151 "name": "BaseBdev3", 00:13:09.151 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:09.151 "is_configured": true, 00:13:09.151 "data_offset": 2048, 00:13:09.151 "data_size": 63488 00:13:09.151 }, 00:13:09.151 { 00:13:09.151 "name": "BaseBdev4", 00:13:09.151 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:09.151 "is_configured": true, 00:13:09.151 "data_offset": 2048, 00:13:09.151 "data_size": 63488 00:13:09.151 } 00:13:09.151 ] 00:13:09.151 }' 00:13:09.151 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.151 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.410 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.410 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.410 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.410 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:09.410 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.679 [2024-10-08 16:21:02.778073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.679 16:21:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.679 "name": "Existed_Raid", 00:13:09.679 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:09.679 "strip_size_kb": 64, 00:13:09.679 "state": "configuring", 00:13:09.679 "raid_level": "concat", 00:13:09.679 "superblock": true, 00:13:09.679 "num_base_bdevs": 4, 00:13:09.679 "num_base_bdevs_discovered": 3, 00:13:09.679 "num_base_bdevs_operational": 4, 00:13:09.679 "base_bdevs_list": [ 00:13:09.679 { 00:13:09.679 "name": null, 00:13:09.679 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:09.679 "is_configured": false, 00:13:09.679 "data_offset": 0, 00:13:09.679 "data_size": 63488 00:13:09.679 }, 00:13:09.679 { 00:13:09.679 "name": "BaseBdev2", 00:13:09.679 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:09.679 "is_configured": true, 00:13:09.679 "data_offset": 2048, 00:13:09.679 "data_size": 63488 00:13:09.679 }, 00:13:09.679 { 00:13:09.679 "name": "BaseBdev3", 00:13:09.679 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:09.679 "is_configured": true, 00:13:09.679 "data_offset": 2048, 00:13:09.679 "data_size": 63488 00:13:09.679 }, 00:13:09.679 { 00:13:09.679 "name": "BaseBdev4", 00:13:09.679 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:09.679 "is_configured": true, 00:13:09.679 "data_offset": 2048, 00:13:09.679 "data_size": 63488 00:13:09.679 } 00:13:09.679 ] 00:13:09.679 }' 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.679 16:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.292 16:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.292 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f0fad6f-b56c-4cf9-a84f-ded8a6040897 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 [2024-10-08 16:21:03.424901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:10.293 [2024-10-08 16:21:03.425242] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:10.293 [2024-10-08 16:21:03.425261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:10.293 NewBaseBdev 00:13:10.293 [2024-10-08 16:21:03.425604] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:10.293 [2024-10-08 16:21:03.425787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:10.293 [2024-10-08 16:21:03.425809] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:10.293 [2024-10-08 16:21:03.425960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.293 16:21:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 [ 00:13:10.293 { 00:13:10.293 "name": "NewBaseBdev", 00:13:10.293 "aliases": [ 00:13:10.293 "4f0fad6f-b56c-4cf9-a84f-ded8a6040897" 00:13:10.293 ], 00:13:10.293 "product_name": "Malloc disk", 00:13:10.293 "block_size": 512, 00:13:10.293 "num_blocks": 65536, 00:13:10.293 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:10.293 "assigned_rate_limits": { 00:13:10.293 "rw_ios_per_sec": 0, 00:13:10.293 "rw_mbytes_per_sec": 0, 00:13:10.293 "r_mbytes_per_sec": 0, 00:13:10.293 "w_mbytes_per_sec": 0 00:13:10.293 }, 00:13:10.293 "claimed": true, 00:13:10.293 "claim_type": "exclusive_write", 00:13:10.293 "zoned": false, 00:13:10.293 "supported_io_types": { 00:13:10.293 "read": true, 00:13:10.293 "write": true, 00:13:10.293 "unmap": true, 00:13:10.293 "flush": true, 00:13:10.293 "reset": true, 00:13:10.293 "nvme_admin": false, 00:13:10.293 "nvme_io": false, 00:13:10.293 "nvme_io_md": false, 00:13:10.293 "write_zeroes": true, 00:13:10.293 "zcopy": true, 00:13:10.293 "get_zone_info": false, 00:13:10.293 "zone_management": false, 00:13:10.293 "zone_append": false, 00:13:10.293 "compare": false, 00:13:10.293 "compare_and_write": false, 00:13:10.293 "abort": true, 00:13:10.293 "seek_hole": false, 00:13:10.293 "seek_data": false, 00:13:10.293 "copy": true, 00:13:10.293 "nvme_iov_md": false 00:13:10.293 }, 00:13:10.293 "memory_domains": [ 00:13:10.293 { 00:13:10.293 "dma_device_id": "system", 00:13:10.293 "dma_device_type": 1 00:13:10.293 }, 00:13:10.293 { 00:13:10.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.293 "dma_device_type": 2 00:13:10.293 } 00:13:10.293 ], 00:13:10.293 "driver_specific": {} 00:13:10.293 } 00:13:10.293 ] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:10.293 16:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.293 "name": "Existed_Raid", 00:13:10.293 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:10.293 "strip_size_kb": 64, 00:13:10.293 
"state": "online", 00:13:10.293 "raid_level": "concat", 00:13:10.293 "superblock": true, 00:13:10.293 "num_base_bdevs": 4, 00:13:10.293 "num_base_bdevs_discovered": 4, 00:13:10.293 "num_base_bdevs_operational": 4, 00:13:10.293 "base_bdevs_list": [ 00:13:10.293 { 00:13:10.293 "name": "NewBaseBdev", 00:13:10.293 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:10.293 "is_configured": true, 00:13:10.293 "data_offset": 2048, 00:13:10.293 "data_size": 63488 00:13:10.293 }, 00:13:10.293 { 00:13:10.293 "name": "BaseBdev2", 00:13:10.293 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:10.293 "is_configured": true, 00:13:10.293 "data_offset": 2048, 00:13:10.293 "data_size": 63488 00:13:10.293 }, 00:13:10.293 { 00:13:10.293 "name": "BaseBdev3", 00:13:10.293 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:10.293 "is_configured": true, 00:13:10.293 "data_offset": 2048, 00:13:10.293 "data_size": 63488 00:13:10.293 }, 00:13:10.293 { 00:13:10.293 "name": "BaseBdev4", 00:13:10.293 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:10.293 "is_configured": true, 00:13:10.293 "data_offset": 2048, 00:13:10.293 "data_size": 63488 00:13:10.293 } 00:13:10.293 ] 00:13:10.293 }' 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.293 16:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:10.860 
16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:10.860 [2024-10-08 16:21:04.025637] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.860 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:10.860 "name": "Existed_Raid", 00:13:10.860 "aliases": [ 00:13:10.860 "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655" 00:13:10.860 ], 00:13:10.860 "product_name": "Raid Volume", 00:13:10.860 "block_size": 512, 00:13:10.860 "num_blocks": 253952, 00:13:10.860 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:10.860 "assigned_rate_limits": { 00:13:10.860 "rw_ios_per_sec": 0, 00:13:10.860 "rw_mbytes_per_sec": 0, 00:13:10.860 "r_mbytes_per_sec": 0, 00:13:10.860 "w_mbytes_per_sec": 0 00:13:10.860 }, 00:13:10.860 "claimed": false, 00:13:10.860 "zoned": false, 00:13:10.860 "supported_io_types": { 00:13:10.860 "read": true, 00:13:10.860 "write": true, 00:13:10.860 "unmap": true, 00:13:10.860 "flush": true, 00:13:10.860 "reset": true, 00:13:10.860 "nvme_admin": false, 00:13:10.860 "nvme_io": false, 00:13:10.860 "nvme_io_md": false, 00:13:10.860 "write_zeroes": true, 00:13:10.860 "zcopy": false, 00:13:10.860 "get_zone_info": false, 00:13:10.860 "zone_management": false, 00:13:10.860 "zone_append": false, 00:13:10.860 "compare": false, 00:13:10.860 "compare_and_write": false, 00:13:10.860 "abort": 
false, 00:13:10.860 "seek_hole": false, 00:13:10.860 "seek_data": false, 00:13:10.860 "copy": false, 00:13:10.860 "nvme_iov_md": false 00:13:10.860 }, 00:13:10.860 "memory_domains": [ 00:13:10.860 { 00:13:10.860 "dma_device_id": "system", 00:13:10.860 "dma_device_type": 1 00:13:10.860 }, 00:13:10.860 { 00:13:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.860 "dma_device_type": 2 00:13:10.860 }, 00:13:10.860 { 00:13:10.860 "dma_device_id": "system", 00:13:10.860 "dma_device_type": 1 00:13:10.860 }, 00:13:10.860 { 00:13:10.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.861 "dma_device_type": 2 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "dma_device_id": "system", 00:13:10.861 "dma_device_type": 1 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.861 "dma_device_type": 2 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "dma_device_id": "system", 00:13:10.861 "dma_device_type": 1 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.861 "dma_device_type": 2 00:13:10.861 } 00:13:10.861 ], 00:13:10.861 "driver_specific": { 00:13:10.861 "raid": { 00:13:10.861 "uuid": "4eb94cf1-309a-4e6c-bfe4-bb8beb5f9655", 00:13:10.861 "strip_size_kb": 64, 00:13:10.861 "state": "online", 00:13:10.861 "raid_level": "concat", 00:13:10.861 "superblock": true, 00:13:10.861 "num_base_bdevs": 4, 00:13:10.861 "num_base_bdevs_discovered": 4, 00:13:10.861 "num_base_bdevs_operational": 4, 00:13:10.861 "base_bdevs_list": [ 00:13:10.861 { 00:13:10.861 "name": "NewBaseBdev", 00:13:10.861 "uuid": "4f0fad6f-b56c-4cf9-a84f-ded8a6040897", 00:13:10.861 "is_configured": true, 00:13:10.861 "data_offset": 2048, 00:13:10.861 "data_size": 63488 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "name": "BaseBdev2", 00:13:10.861 "uuid": "3e0cfa22-130e-47b6-a824-b5fc1f206652", 00:13:10.861 "is_configured": true, 00:13:10.861 "data_offset": 2048, 00:13:10.861 "data_size": 63488 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 
"name": "BaseBdev3", 00:13:10.861 "uuid": "f0148b97-a29e-4884-afbe-26ecf50a3bc3", 00:13:10.861 "is_configured": true, 00:13:10.861 "data_offset": 2048, 00:13:10.861 "data_size": 63488 00:13:10.861 }, 00:13:10.861 { 00:13:10.861 "name": "BaseBdev4", 00:13:10.861 "uuid": "590194a4-18f7-4e20-bb72-dd8cb367e3e9", 00:13:10.861 "is_configured": true, 00:13:10.861 "data_offset": 2048, 00:13:10.861 "data_size": 63488 00:13:10.861 } 00:13:10.861 ] 00:13:10.861 } 00:13:10.861 } 00:13:10.861 }' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:10.861 BaseBdev2 00:13:10.861 BaseBdev3 00:13:10.861 BaseBdev4' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.861 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.120 16:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.120 [2024-10-08 16:21:04.389291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.120 [2024-10-08 16:21:04.389445] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.120 [2024-10-08 16:21:04.389730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.120 [2024-10-08 16:21:04.389978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.120 [2024-10-08 16:21:04.390006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72413 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72413 ']' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72413 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72413 00:13:11.120 killing process with pid 72413 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72413' 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72413 00:13:11.120 [2024-10-08 16:21:04.429266] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.120 16:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72413 00:13:11.687 [2024-10-08 16:21:04.785023] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.088 16:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:13.088 00:13:13.088 real 0m13.154s 00:13:13.088 user 0m21.715s 00:13:13.088 sys 0m1.875s 00:13:13.088 16:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.088 16:21:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.088 ************************************ 00:13:13.088 END TEST raid_state_function_test_sb 00:13:13.088 ************************************ 00:13:13.088 16:21:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:13.088 16:21:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:13.088 16:21:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.088 16:21:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.088 ************************************ 00:13:13.088 START TEST raid_superblock_test 00:13:13.088 ************************************ 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73100 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73100 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73100 ']' 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.088 16:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.088 [2024-10-08 16:21:06.163151] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:13:13.088 [2024-10-08 16:21:06.163354] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73100 ] 00:13:13.088 [2024-10-08 16:21:06.342769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.347 [2024-10-08 16:21:06.579332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.605 [2024-10-08 16:21:06.784665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.605 [2024-10-08 16:21:06.784722] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:13.863 
16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.863 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.122 malloc1 00:13:14.122 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.122 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:14.122 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.122 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.122 [2024-10-08 16:21:07.203173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:14.122 [2024-10-08 16:21:07.203300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.122 [2024-10-08 16:21:07.203334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:14.122 [2024-10-08 16:21:07.203351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.123 [2024-10-08 16:21:07.206457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.123 [2024-10-08 16:21:07.206692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:14.123 pt1 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 malloc2 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 [2024-10-08 16:21:07.279176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:14.123 [2024-10-08 16:21:07.279535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.123 [2024-10-08 16:21:07.279617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:14.123 [2024-10-08 16:21:07.279812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.123 [2024-10-08 16:21:07.282742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.123 [2024-10-08 16:21:07.282921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:14.123 
pt2 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 malloc3 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 [2024-10-08 16:21:07.335487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.123 [2024-10-08 16:21:07.335818] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.123 [2024-10-08 16:21:07.335870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:14.123 [2024-10-08 16:21:07.335888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.123 [2024-10-08 16:21:07.338915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.123 [2024-10-08 16:21:07.338949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.123 pt3 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 malloc4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 [2024-10-08 16:21:07.389441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:14.123 [2024-10-08 16:21:07.389774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.123 [2024-10-08 16:21:07.389843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:14.123 [2024-10-08 16:21:07.389967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.123 [2024-10-08 16:21:07.392645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.123 [2024-10-08 16:21:07.392690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:14.123 pt4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 [2024-10-08 16:21:07.397570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:14.123 [2024-10-08 
16:21:07.400049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:14.123 [2024-10-08 16:21:07.400298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.123 [2024-10-08 16:21:07.400540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:14.123 [2024-10-08 16:21:07.400932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:14.123 [2024-10-08 16:21:07.401093] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:14.123 [2024-10-08 16:21:07.401460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:14.123 [2024-10-08 16:21:07.401705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:14.123 [2024-10-08 16:21:07.401728] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:14.123 [2024-10-08 16:21:07.401937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.382 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.382 "name": "raid_bdev1", 00:13:14.382 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:14.382 "strip_size_kb": 64, 00:13:14.382 "state": "online", 00:13:14.382 "raid_level": "concat", 00:13:14.382 "superblock": true, 00:13:14.382 "num_base_bdevs": 4, 00:13:14.382 "num_base_bdevs_discovered": 4, 00:13:14.382 "num_base_bdevs_operational": 4, 00:13:14.382 "base_bdevs_list": [ 00:13:14.382 { 00:13:14.382 "name": "pt1", 00:13:14.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.382 "is_configured": true, 00:13:14.382 "data_offset": 2048, 00:13:14.382 "data_size": 63488 00:13:14.382 }, 00:13:14.382 { 00:13:14.382 "name": "pt2", 00:13:14.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.382 "is_configured": true, 00:13:14.382 "data_offset": 2048, 00:13:14.382 "data_size": 63488 00:13:14.382 }, 00:13:14.382 { 00:13:14.382 "name": "pt3", 00:13:14.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.382 "is_configured": true, 00:13:14.382 "data_offset": 2048, 00:13:14.382 
"data_size": 63488 00:13:14.382 }, 00:13:14.382 { 00:13:14.382 "name": "pt4", 00:13:14.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.382 "is_configured": true, 00:13:14.382 "data_offset": 2048, 00:13:14.382 "data_size": 63488 00:13:14.382 } 00:13:14.382 ] 00:13:14.382 }' 00:13:14.382 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.382 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.641 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.641 [2024-10-08 16:21:07.954485] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.899 16:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.899 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.899 "name": "raid_bdev1", 00:13:14.899 "aliases": [ 00:13:14.899 "ff9b47a2-6bbe-4282-965c-0dfe801ed512" 
00:13:14.899 ], 00:13:14.899 "product_name": "Raid Volume", 00:13:14.899 "block_size": 512, 00:13:14.899 "num_blocks": 253952, 00:13:14.899 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:14.899 "assigned_rate_limits": { 00:13:14.899 "rw_ios_per_sec": 0, 00:13:14.899 "rw_mbytes_per_sec": 0, 00:13:14.899 "r_mbytes_per_sec": 0, 00:13:14.899 "w_mbytes_per_sec": 0 00:13:14.899 }, 00:13:14.899 "claimed": false, 00:13:14.899 "zoned": false, 00:13:14.899 "supported_io_types": { 00:13:14.899 "read": true, 00:13:14.899 "write": true, 00:13:14.899 "unmap": true, 00:13:14.899 "flush": true, 00:13:14.899 "reset": true, 00:13:14.899 "nvme_admin": false, 00:13:14.899 "nvme_io": false, 00:13:14.899 "nvme_io_md": false, 00:13:14.899 "write_zeroes": true, 00:13:14.899 "zcopy": false, 00:13:14.899 "get_zone_info": false, 00:13:14.899 "zone_management": false, 00:13:14.899 "zone_append": false, 00:13:14.899 "compare": false, 00:13:14.899 "compare_and_write": false, 00:13:14.899 "abort": false, 00:13:14.899 "seek_hole": false, 00:13:14.899 "seek_data": false, 00:13:14.899 "copy": false, 00:13:14.899 "nvme_iov_md": false 00:13:14.899 }, 00:13:14.899 "memory_domains": [ 00:13:14.899 { 00:13:14.899 "dma_device_id": "system", 00:13:14.899 "dma_device_type": 1 00:13:14.899 }, 00:13:14.899 { 00:13:14.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.899 "dma_device_type": 2 00:13:14.899 }, 00:13:14.899 { 00:13:14.899 "dma_device_id": "system", 00:13:14.899 "dma_device_type": 1 00:13:14.899 }, 00:13:14.899 { 00:13:14.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.899 "dma_device_type": 2 00:13:14.899 }, 00:13:14.899 { 00:13:14.900 "dma_device_id": "system", 00:13:14.900 "dma_device_type": 1 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.900 "dma_device_type": 2 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "dma_device_id": "system", 00:13:14.900 "dma_device_type": 1 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:14.900 "dma_device_type": 2 00:13:14.900 } 00:13:14.900 ], 00:13:14.900 "driver_specific": { 00:13:14.900 "raid": { 00:13:14.900 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:14.900 "strip_size_kb": 64, 00:13:14.900 "state": "online", 00:13:14.900 "raid_level": "concat", 00:13:14.900 "superblock": true, 00:13:14.900 "num_base_bdevs": 4, 00:13:14.900 "num_base_bdevs_discovered": 4, 00:13:14.900 "num_base_bdevs_operational": 4, 00:13:14.900 "base_bdevs_list": [ 00:13:14.900 { 00:13:14.900 "name": "pt1", 00:13:14.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": "pt2", 00:13:14.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": "pt3", 00:13:14.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": "pt4", 00:13:14.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 } 00:13:14.900 ] 00:13:14.900 } 00:13:14.900 } 00:13:14.900 }' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:14.900 pt2 00:13:14.900 pt3 00:13:14.900 pt4' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.900 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.162 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.162 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.162 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.162 16:21:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-10-08 16:21:08.338550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ff9b47a2-6bbe-4282-965c-0dfe801ed512 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ff9b47a2-6bbe-4282-965c-0dfe801ed512 ']' 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-10-08 16:21:08.390259] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.163 [2024-10-08 16:21:08.390576] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.163 [2024-10-08 16:21:08.390877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.163 [2024-10-08 16:21:08.391191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.163 [2024-10-08 16:21:08.391429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.163 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.164 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.165 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.475 16:21:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.475 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.475 [2024-10-08 16:21:08.546265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:15.475 [2024-10-08 16:21:08.548850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:15.475 [2024-10-08 16:21:08.549039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:15.475 [2024-10-08 16:21:08.549108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:15.475 [2024-10-08 16:21:08.549181] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:15.475 [2024-10-08 16:21:08.549258] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:15.475 [2024-10-08 16:21:08.549292] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:15.475 [2024-10-08 16:21:08.549325] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:15.475 [2024-10-08 16:21:08.549349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.475 [2024-10-08 16:21:08.549365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:15.475 request: 00:13:15.475 { 00:13:15.475 "name": "raid_bdev1", 00:13:15.475 "raid_level": "concat", 00:13:15.475 "base_bdevs": [ 00:13:15.475 "malloc1", 00:13:15.475 "malloc2", 00:13:15.475 "malloc3", 00:13:15.475 "malloc4" 00:13:15.475 ], 00:13:15.475 "strip_size_kb": 64, 00:13:15.475 "superblock": false, 00:13:15.475 "method": "bdev_raid_create", 00:13:15.475 "req_id": 1 00:13:15.475 } 00:13:15.476 Got JSON-RPC error response 00:13:15.476 response: 00:13:15.476 { 00:13:15.476 "code": -17, 00:13:15.476 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:15.476 } 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.476 [2024-10-08 16:21:08.610240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:15.476 [2024-10-08 16:21:08.610611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.476 [2024-10-08 16:21:08.610749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:15.476 [2024-10-08 16:21:08.610869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.476 [2024-10-08 16:21:08.613820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.476 [2024-10-08 16:21:08.613985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:15.476 [2024-10-08 16:21:08.614200] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:15.476 [2024-10-08 16:21:08.614391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:15.476 pt1 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.476 "name": "raid_bdev1", 00:13:15.476 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:15.476 "strip_size_kb": 64, 00:13:15.476 "state": "configuring", 00:13:15.476 "raid_level": "concat", 00:13:15.476 "superblock": true, 00:13:15.476 "num_base_bdevs": 4, 00:13:15.476 "num_base_bdevs_discovered": 1, 00:13:15.476 "num_base_bdevs_operational": 4, 00:13:15.476 "base_bdevs_list": [ 00:13:15.476 { 00:13:15.476 "name": "pt1", 00:13:15.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:15.476 "is_configured": true, 00:13:15.476 "data_offset": 2048, 00:13:15.476 "data_size": 63488 00:13:15.476 }, 00:13:15.476 { 00:13:15.476 "name": null, 00:13:15.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.476 "is_configured": false, 00:13:15.476 "data_offset": 2048, 00:13:15.476 "data_size": 63488 00:13:15.476 }, 00:13:15.476 { 00:13:15.476 "name": null, 00:13:15.476 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.476 "is_configured": false, 00:13:15.476 "data_offset": 2048, 00:13:15.476 "data_size": 63488 00:13:15.476 }, 00:13:15.476 { 00:13:15.476 "name": null, 00:13:15.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:15.476 "is_configured": false, 00:13:15.476 "data_offset": 2048, 00:13:15.476 "data_size": 63488 00:13:15.476 } 00:13:15.476 ] 00:13:15.476 }' 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.476 16:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.043 [2024-10-08 16:21:09.162496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.043 [2024-10-08 16:21:09.162946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.043 [2024-10-08 16:21:09.162987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:16.043 [2024-10-08 16:21:09.163008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.043 [2024-10-08 16:21:09.163658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.043 [2024-10-08 16:21:09.163690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.043 [2024-10-08 16:21:09.163800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:16.043 [2024-10-08 16:21:09.163839] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.043 pt2 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.043 [2024-10-08 16:21:09.170455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.043 16:21:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.043 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.043 "name": "raid_bdev1", 00:13:16.043 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:16.043 "strip_size_kb": 64, 00:13:16.043 "state": "configuring", 00:13:16.043 "raid_level": "concat", 00:13:16.043 "superblock": true, 00:13:16.043 "num_base_bdevs": 4, 00:13:16.043 "num_base_bdevs_discovered": 1, 00:13:16.043 "num_base_bdevs_operational": 4, 00:13:16.043 "base_bdevs_list": [ 00:13:16.043 { 00:13:16.044 "name": "pt1", 00:13:16.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:16.044 "is_configured": true, 00:13:16.044 "data_offset": 2048, 00:13:16.044 "data_size": 63488 00:13:16.044 }, 00:13:16.044 { 00:13:16.044 "name": null, 00:13:16.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.044 "is_configured": false, 00:13:16.044 "data_offset": 0, 00:13:16.044 "data_size": 63488 00:13:16.044 }, 00:13:16.044 { 00:13:16.044 "name": null, 00:13:16.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.044 "is_configured": false, 00:13:16.044 "data_offset": 2048, 00:13:16.044 "data_size": 63488 00:13:16.044 }, 00:13:16.044 { 00:13:16.044 "name": null, 00:13:16.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:16.044 "is_configured": false, 00:13:16.044 "data_offset": 2048, 00:13:16.044 "data_size": 63488 00:13:16.044 } 00:13:16.044 ] 00:13:16.044 }' 00:13:16.044 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.044 16:21:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 [2024-10-08 16:21:09.710695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.610 [2024-10-08 16:21:09.711037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.610 [2024-10-08 16:21:09.711080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:16.610 [2024-10-08 16:21:09.711095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.610 [2024-10-08 16:21:09.711697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.610 [2024-10-08 16:21:09.711722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.610 [2024-10-08 16:21:09.711849] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:16.610 [2024-10-08 16:21:09.711881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.610 pt2 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 [2024-10-08 16:21:09.718653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:16.610 [2024-10-08 16:21:09.718887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.610 [2024-10-08 16:21:09.719060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:16.610 [2024-10-08 16:21:09.719184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.610 [2024-10-08 16:21:09.719739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.610 [2024-10-08 16:21:09.719885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:16.610 [2024-10-08 16:21:09.720075] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:16.610 [2024-10-08 16:21:09.720212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:16.610 pt3 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 [2024-10-08 16:21:09.730609] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:16.610 [2024-10-08 16:21:09.730838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.610 [2024-10-08 16:21:09.730903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:16.610 [2024-10-08 16:21:09.731026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.610 [2024-10-08 16:21:09.731598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.610 [2024-10-08 16:21:09.731835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:16.610 [2024-10-08 16:21:09.732097] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:16.610 [2024-10-08 16:21:09.732135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:16.610 [2024-10-08 16:21:09.732315] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:16.610 [2024-10-08 16:21:09.732331] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:16.610 [2024-10-08 16:21:09.732686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:16.610 [2024-10-08 16:21:09.732899] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:16.610 [2024-10-08 16:21:09.732935] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:16.610 [2024-10-08 16:21:09.733088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.610 pt4 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.610 "name": "raid_bdev1", 00:13:16.610 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:16.610 "strip_size_kb": 64, 00:13:16.610 "state": "online", 00:13:16.610 "raid_level": "concat", 00:13:16.610 
"superblock": true, 00:13:16.610 "num_base_bdevs": 4, 00:13:16.610 "num_base_bdevs_discovered": 4, 00:13:16.610 "num_base_bdevs_operational": 4, 00:13:16.610 "base_bdevs_list": [ 00:13:16.610 { 00:13:16.610 "name": "pt1", 00:13:16.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:16.610 "is_configured": true, 00:13:16.610 "data_offset": 2048, 00:13:16.610 "data_size": 63488 00:13:16.610 }, 00:13:16.610 { 00:13:16.610 "name": "pt2", 00:13:16.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.610 "is_configured": true, 00:13:16.610 "data_offset": 2048, 00:13:16.610 "data_size": 63488 00:13:16.610 }, 00:13:16.610 { 00:13:16.610 "name": "pt3", 00:13:16.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.610 "is_configured": true, 00:13:16.610 "data_offset": 2048, 00:13:16.610 "data_size": 63488 00:13:16.610 }, 00:13:16.610 { 00:13:16.610 "name": "pt4", 00:13:16.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:16.610 "is_configured": true, 00:13:16.610 "data_offset": 2048, 00:13:16.610 "data_size": 63488 00:13:16.610 } 00:13:16.610 ] 00:13:16.610 }' 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.610 16:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:17.177 16:21:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.177 [2024-10-08 16:21:10.311267] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:17.177 "name": "raid_bdev1", 00:13:17.177 "aliases": [ 00:13:17.177 "ff9b47a2-6bbe-4282-965c-0dfe801ed512" 00:13:17.177 ], 00:13:17.177 "product_name": "Raid Volume", 00:13:17.177 "block_size": 512, 00:13:17.177 "num_blocks": 253952, 00:13:17.177 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:17.177 "assigned_rate_limits": { 00:13:17.177 "rw_ios_per_sec": 0, 00:13:17.177 "rw_mbytes_per_sec": 0, 00:13:17.177 "r_mbytes_per_sec": 0, 00:13:17.177 "w_mbytes_per_sec": 0 00:13:17.177 }, 00:13:17.177 "claimed": false, 00:13:17.177 "zoned": false, 00:13:17.177 "supported_io_types": { 00:13:17.177 "read": true, 00:13:17.177 "write": true, 00:13:17.177 "unmap": true, 00:13:17.177 "flush": true, 00:13:17.177 "reset": true, 00:13:17.177 "nvme_admin": false, 00:13:17.177 "nvme_io": false, 00:13:17.177 "nvme_io_md": false, 00:13:17.177 "write_zeroes": true, 00:13:17.177 "zcopy": false, 00:13:17.177 "get_zone_info": false, 00:13:17.177 "zone_management": false, 00:13:17.177 "zone_append": false, 00:13:17.177 "compare": false, 00:13:17.177 "compare_and_write": false, 00:13:17.177 "abort": false, 00:13:17.177 "seek_hole": false, 00:13:17.177 "seek_data": false, 00:13:17.177 "copy": false, 00:13:17.177 "nvme_iov_md": false 00:13:17.177 }, 00:13:17.177 
"memory_domains": [ 00:13:17.177 { 00:13:17.177 "dma_device_id": "system", 00:13:17.177 "dma_device_type": 1 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.177 "dma_device_type": 2 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "system", 00:13:17.177 "dma_device_type": 1 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.177 "dma_device_type": 2 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "system", 00:13:17.177 "dma_device_type": 1 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.177 "dma_device_type": 2 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "system", 00:13:17.177 "dma_device_type": 1 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.177 "dma_device_type": 2 00:13:17.177 } 00:13:17.177 ], 00:13:17.177 "driver_specific": { 00:13:17.177 "raid": { 00:13:17.177 "uuid": "ff9b47a2-6bbe-4282-965c-0dfe801ed512", 00:13:17.177 "strip_size_kb": 64, 00:13:17.177 "state": "online", 00:13:17.177 "raid_level": "concat", 00:13:17.177 "superblock": true, 00:13:17.177 "num_base_bdevs": 4, 00:13:17.177 "num_base_bdevs_discovered": 4, 00:13:17.177 "num_base_bdevs_operational": 4, 00:13:17.177 "base_bdevs_list": [ 00:13:17.177 { 00:13:17.177 "name": "pt1", 00:13:17.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.177 "is_configured": true, 00:13:17.177 "data_offset": 2048, 00:13:17.177 "data_size": 63488 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "name": "pt2", 00:13:17.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.177 "is_configured": true, 00:13:17.177 "data_offset": 2048, 00:13:17.177 "data_size": 63488 00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "name": "pt3", 00:13:17.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.177 "is_configured": true, 00:13:17.177 "data_offset": 2048, 00:13:17.177 "data_size": 63488 
00:13:17.177 }, 00:13:17.177 { 00:13:17.177 "name": "pt4", 00:13:17.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:17.177 "is_configured": true, 00:13:17.177 "data_offset": 2048, 00:13:17.177 "data_size": 63488 00:13:17.177 } 00:13:17.177 ] 00:13:17.177 } 00:13:17.177 } 00:13:17.177 }' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:17.177 pt2 00:13:17.177 pt3 00:13:17.177 pt4' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.177 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.435 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-10-08 16:21:10.691283] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ff9b47a2-6bbe-4282-965c-0dfe801ed512 '!=' ff9b47a2-6bbe-4282-965c-0dfe801ed512 ']' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73100 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73100 ']' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73100 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.436 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73100 00:13:17.694 killing process with pid 73100 00:13:17.694 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.694 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.694 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73100' 00:13:17.694 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73100 00:13:17.694 [2024-10-08 16:21:10.767359] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.694 16:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73100 00:13:17.694 [2024-10-08 16:21:10.767459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.694 [2024-10-08 16:21:10.767589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.694 [2024-10-08 16:21:10.767606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:18.009 [2024-10-08 16:21:11.104701] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.382 16:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:19.382 ************************************ 00:13:19.382 END TEST raid_superblock_test 00:13:19.382 ************************************ 00:13:19.382 00:13:19.382 real 0m6.248s 00:13:19.382 user 0m9.293s 00:13:19.382 sys 0m0.985s 00:13:19.382 16:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.382 16:21:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 16:21:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:19.382 16:21:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:19.382 16:21:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.382 16:21:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 ************************************ 00:13:19.382 START TEST raid_read_error_test 00:13:19.382 ************************************ 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:19.382 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h4DL0sCh0g 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73366 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73366 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73366 ']' 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.383 16:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 [2024-10-08 16:21:12.478466] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:13:19.383 [2024-10-08 16:21:12.478670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73366 ] 00:13:19.383 [2024-10-08 16:21:12.652894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.640 [2024-10-08 16:21:12.884253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.899 [2024-10-08 16:21:13.082842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.899 [2024-10-08 16:21:13.082904] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.157 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 BaseBdev1_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 true 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 [2024-10-08 16:21:13.525160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:20.476 [2024-10-08 16:21:13.525243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.476 [2024-10-08 16:21:13.525271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:20.476 [2024-10-08 16:21:13.525289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.476 [2024-10-08 16:21:13.528095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.476 [2024-10-08 16:21:13.528145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.476 BaseBdev1 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 BaseBdev2_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 true 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 [2024-10-08 16:21:13.595091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:20.476 [2024-10-08 16:21:13.595181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.476 [2024-10-08 16:21:13.595215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:20.476 [2024-10-08 16:21:13.595233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.476 [2024-10-08 16:21:13.598068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.476 [2024-10-08 16:21:13.598120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.476 BaseBdev2 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 BaseBdev3_malloc 00:13:20.476 16:21:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 true 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.476 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 [2024-10-08 16:21:13.651799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:20.476 [2024-10-08 16:21:13.651902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.476 [2024-10-08 16:21:13.651928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:20.477 [2024-10-08 16:21:13.651945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.477 [2024-10-08 16:21:13.654859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.477 [2024-10-08 16:21:13.654924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.477 BaseBdev3 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 BaseBdev4_malloc 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 true 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 [2024-10-08 16:21:13.709156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:20.477 [2024-10-08 16:21:13.709263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.477 [2024-10-08 16:21:13.709297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:20.477 [2024-10-08 16:21:13.709316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.477 [2024-10-08 16:21:13.712286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.477 [2024-10-08 16:21:13.712341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:20.477 BaseBdev4 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 [2024-10-08 16:21:13.717324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.477 [2024-10-08 16:21:13.719848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.477 [2024-10-08 16:21:13.719962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.477 [2024-10-08 16:21:13.720088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.477 [2024-10-08 16:21:13.720387] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:20.477 [2024-10-08 16:21:13.720410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:20.477 [2024-10-08 16:21:13.720780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:20.477 [2024-10-08 16:21:13.721013] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:20.477 [2024-10-08 16:21:13.721044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:20.477 [2024-10-08 16:21:13.721303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:20.477 16:21:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.734 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.734 "name": "raid_bdev1", 00:13:20.734 "uuid": "a389d775-baa3-467d-98ec-0e46878fc9c6", 00:13:20.734 "strip_size_kb": 64, 00:13:20.734 "state": "online", 00:13:20.734 "raid_level": "concat", 00:13:20.734 "superblock": true, 00:13:20.734 "num_base_bdevs": 4, 00:13:20.734 "num_base_bdevs_discovered": 4, 00:13:20.734 "num_base_bdevs_operational": 4, 00:13:20.734 "base_bdevs_list": [ 
00:13:20.734 { 00:13:20.734 "name": "BaseBdev1", 00:13:20.734 "uuid": "36c04e6e-f265-56d4-bad7-74d3a1ba29d4", 00:13:20.734 "is_configured": true, 00:13:20.734 "data_offset": 2048, 00:13:20.734 "data_size": 63488 00:13:20.734 }, 00:13:20.734 { 00:13:20.734 "name": "BaseBdev2", 00:13:20.734 "uuid": "33d8065a-7320-5ad0-9378-871ee0b53430", 00:13:20.734 "is_configured": true, 00:13:20.734 "data_offset": 2048, 00:13:20.734 "data_size": 63488 00:13:20.734 }, 00:13:20.734 { 00:13:20.734 "name": "BaseBdev3", 00:13:20.734 "uuid": "69ae6823-4535-5f70-a065-4c5c77c5db96", 00:13:20.734 "is_configured": true, 00:13:20.734 "data_offset": 2048, 00:13:20.734 "data_size": 63488 00:13:20.734 }, 00:13:20.734 { 00:13:20.734 "name": "BaseBdev4", 00:13:20.734 "uuid": "95ac05eb-e912-5def-850d-4b20cd71512f", 00:13:20.734 "is_configured": true, 00:13:20.734 "data_offset": 2048, 00:13:20.735 "data_size": 63488 00:13:20.735 } 00:13:20.735 ] 00:13:20.735 }' 00:13:20.735 16:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.735 16:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 16:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:20.992 16:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:21.249 [2024-10-08 16:21:14.382880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.182 16:21:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.182 16:21:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.182 "name": "raid_bdev1", 00:13:22.182 "uuid": "a389d775-baa3-467d-98ec-0e46878fc9c6", 00:13:22.182 "strip_size_kb": 64, 00:13:22.182 "state": "online", 00:13:22.182 "raid_level": "concat", 00:13:22.182 "superblock": true, 00:13:22.182 "num_base_bdevs": 4, 00:13:22.182 "num_base_bdevs_discovered": 4, 00:13:22.182 "num_base_bdevs_operational": 4, 00:13:22.182 "base_bdevs_list": [ 00:13:22.182 { 00:13:22.182 "name": "BaseBdev1", 00:13:22.182 "uuid": "36c04e6e-f265-56d4-bad7-74d3a1ba29d4", 00:13:22.182 "is_configured": true, 00:13:22.182 "data_offset": 2048, 00:13:22.182 "data_size": 63488 00:13:22.182 }, 00:13:22.182 { 00:13:22.182 "name": "BaseBdev2", 00:13:22.182 "uuid": "33d8065a-7320-5ad0-9378-871ee0b53430", 00:13:22.182 "is_configured": true, 00:13:22.182 "data_offset": 2048, 00:13:22.182 "data_size": 63488 00:13:22.182 }, 00:13:22.182 { 00:13:22.182 "name": "BaseBdev3", 00:13:22.182 "uuid": "69ae6823-4535-5f70-a065-4c5c77c5db96", 00:13:22.182 "is_configured": true, 00:13:22.182 "data_offset": 2048, 00:13:22.182 "data_size": 63488 00:13:22.182 }, 00:13:22.182 { 00:13:22.182 "name": "BaseBdev4", 00:13:22.182 "uuid": "95ac05eb-e912-5def-850d-4b20cd71512f", 00:13:22.182 "is_configured": true, 00:13:22.182 "data_offset": 2048, 00:13:22.182 "data_size": 63488 00:13:22.182 } 00:13:22.182 ] 00:13:22.182 }' 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.182 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.772 [2024-10-08 16:21:15.798222] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.772 [2024-10-08 16:21:15.798291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.772 [2024-10-08 16:21:15.801584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.772 [2024-10-08 16:21:15.801673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.772 [2024-10-08 16:21:15.801731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.772 [2024-10-08 16:21:15.801764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:22.772 { 00:13:22.772 "results": [ 00:13:22.772 { 00:13:22.772 "job": "raid_bdev1", 00:13:22.772 "core_mask": "0x1", 00:13:22.772 "workload": "randrw", 00:13:22.772 "percentage": 50, 00:13:22.772 "status": "finished", 00:13:22.772 "queue_depth": 1, 00:13:22.772 "io_size": 131072, 00:13:22.772 "runtime": 1.412731, 00:13:22.772 "iops": 10815.930279720626, 00:13:22.772 "mibps": 1351.9912849650782, 00:13:22.772 "io_failed": 1, 00:13:22.772 "io_timeout": 0, 00:13:22.772 "avg_latency_us": 129.44714755697805, 00:13:22.772 "min_latency_us": 37.93454545454546, 00:13:22.772 "max_latency_us": 1854.370909090909 00:13:22.772 } 00:13:22.772 ], 00:13:22.772 "core_count": 1 00:13:22.772 } 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73366 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73366 ']' 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73366 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73366 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.772 killing process with pid 73366 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73366' 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73366 00:13:22.772 [2024-10-08 16:21:15.834457] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.772 16:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73366 00:13:23.031 [2024-10-08 16:21:16.125773] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h4DL0sCh0g 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:24.406 00:13:24.406 real 0m5.047s 00:13:24.406 user 0m6.184s 00:13:24.406 sys 0m0.628s 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:24.406 ************************************ 00:13:24.406 END TEST raid_read_error_test 00:13:24.406 ************************************ 00:13:24.406 16:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.406 16:21:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:24.406 16:21:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:24.406 16:21:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.406 16:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.406 ************************************ 00:13:24.406 START TEST raid_write_error_test 00:13:24.406 ************************************ 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4FOG0JAdZj 00:13:24.406 16:21:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73516 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73516 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73516 ']' 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.406 16:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.406 [2024-10-08 16:21:17.589656] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:13:24.406 [2024-10-08 16:21:17.589844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73516 ] 00:13:24.665 [2024-10-08 16:21:17.757803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.922 [2024-10-08 16:21:18.006943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.922 [2024-10-08 16:21:18.212954] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.922 [2024-10-08 16:21:18.213044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.487 BaseBdev1_malloc 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.487 true 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.487 [2024-10-08 16:21:18.701610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:25.487 [2024-10-08 16:21:18.701684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.487 [2024-10-08 16:21:18.701710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:25.487 [2024-10-08 16:21:18.701728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.487 [2024-10-08 16:21:18.704575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.487 [2024-10-08 16:21:18.704625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.487 BaseBdev1 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.487 BaseBdev2_malloc 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:25.487 16:21:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.487 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.487 true 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.488 [2024-10-08 16:21:18.772646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:25.488 [2024-10-08 16:21:18.772747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.488 [2024-10-08 16:21:18.772778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:25.488 [2024-10-08 16:21:18.772797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.488 [2024-10-08 16:21:18.775801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.488 [2024-10-08 16:21:18.775846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.488 BaseBdev2 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.488 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:25.747 BaseBdev3_malloc 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 true 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 [2024-10-08 16:21:18.829690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:25.747 [2024-10-08 16:21:18.829773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.747 [2024-10-08 16:21:18.829802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:25.747 [2024-10-08 16:21:18.829820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.747 [2024-10-08 16:21:18.832769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.747 [2024-10-08 16:21:18.832823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.747 BaseBdev3 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 BaseBdev4_malloc 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 true 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 [2024-10-08 16:21:18.891205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:25.747 [2024-10-08 16:21:18.891296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.747 [2024-10-08 16:21:18.891325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:25.747 [2024-10-08 16:21:18.891345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.747 [2024-10-08 16:21:18.894270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.747 [2024-10-08 16:21:18.894313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:25.747 BaseBdev4 
00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 [2024-10-08 16:21:18.899340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.747 [2024-10-08 16:21:18.901846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.747 [2024-10-08 16:21:18.901962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.747 [2024-10-08 16:21:18.902057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.747 [2024-10-08 16:21:18.902365] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:25.747 [2024-10-08 16:21:18.902399] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:25.747 [2024-10-08 16:21:18.902744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:25.747 [2024-10-08 16:21:18.902964] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:25.747 [2024-10-08 16:21:18.902988] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:25.747 [2024-10-08 16:21:18.903231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.747 "name": "raid_bdev1", 00:13:25.747 "uuid": "bce59f25-4de8-4695-b830-d4bfac2ea33b", 00:13:25.747 "strip_size_kb": 64, 00:13:25.747 "state": "online", 00:13:25.747 "raid_level": "concat", 00:13:25.747 "superblock": true, 00:13:25.747 "num_base_bdevs": 4, 00:13:25.747 "num_base_bdevs_discovered": 4, 00:13:25.747 
"num_base_bdevs_operational": 4, 00:13:25.747 "base_bdevs_list": [ 00:13:25.747 { 00:13:25.747 "name": "BaseBdev1", 00:13:25.747 "uuid": "eec933a6-0d4b-5c93-903f-e5f23c9d9f66", 00:13:25.747 "is_configured": true, 00:13:25.747 "data_offset": 2048, 00:13:25.747 "data_size": 63488 00:13:25.747 }, 00:13:25.747 { 00:13:25.747 "name": "BaseBdev2", 00:13:25.747 "uuid": "49029006-c355-544c-8f5f-5542acb4ff93", 00:13:25.747 "is_configured": true, 00:13:25.747 "data_offset": 2048, 00:13:25.747 "data_size": 63488 00:13:25.747 }, 00:13:25.747 { 00:13:25.747 "name": "BaseBdev3", 00:13:25.747 "uuid": "7fb8bdce-c7a9-5961-857d-c6b32b0ead0c", 00:13:25.747 "is_configured": true, 00:13:25.747 "data_offset": 2048, 00:13:25.747 "data_size": 63488 00:13:25.747 }, 00:13:25.747 { 00:13:25.747 "name": "BaseBdev4", 00:13:25.747 "uuid": "d3a15fa3-f736-5a4b-8bb8-dad2fc0f62ff", 00:13:25.747 "is_configured": true, 00:13:25.747 "data_offset": 2048, 00:13:25.747 "data_size": 63488 00:13:25.747 } 00:13:25.747 ] 00:13:25.747 }' 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.747 16:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.315 16:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:26.315 16:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:26.315 [2024-10-08 16:21:19.549640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.249 16:21:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.249 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.250 "name": "raid_bdev1", 00:13:27.250 "uuid": "bce59f25-4de8-4695-b830-d4bfac2ea33b", 00:13:27.250 "strip_size_kb": 64, 00:13:27.250 "state": "online", 00:13:27.250 "raid_level": "concat", 00:13:27.250 "superblock": true, 00:13:27.250 "num_base_bdevs": 4, 00:13:27.250 "num_base_bdevs_discovered": 4, 00:13:27.250 "num_base_bdevs_operational": 4, 00:13:27.250 "base_bdevs_list": [ 00:13:27.250 { 00:13:27.250 "name": "BaseBdev1", 00:13:27.250 "uuid": "eec933a6-0d4b-5c93-903f-e5f23c9d9f66", 00:13:27.250 "is_configured": true, 00:13:27.250 "data_offset": 2048, 00:13:27.250 "data_size": 63488 00:13:27.250 }, 00:13:27.250 { 00:13:27.250 "name": "BaseBdev2", 00:13:27.250 "uuid": "49029006-c355-544c-8f5f-5542acb4ff93", 00:13:27.250 "is_configured": true, 00:13:27.250 "data_offset": 2048, 00:13:27.250 "data_size": 63488 00:13:27.250 }, 00:13:27.250 { 00:13:27.250 "name": "BaseBdev3", 00:13:27.250 "uuid": "7fb8bdce-c7a9-5961-857d-c6b32b0ead0c", 00:13:27.250 "is_configured": true, 00:13:27.250 "data_offset": 2048, 00:13:27.250 "data_size": 63488 00:13:27.250 }, 00:13:27.250 { 00:13:27.250 "name": "BaseBdev4", 00:13:27.250 "uuid": "d3a15fa3-f736-5a4b-8bb8-dad2fc0f62ff", 00:13:27.250 "is_configured": true, 00:13:27.250 "data_offset": 2048, 00:13:27.250 "data_size": 63488 00:13:27.250 } 00:13:27.250 ] 00:13:27.250 }' 00:13:27.250 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.250 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.816 [2024-10-08 16:21:20.960798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.816 [2024-10-08 16:21:20.960870] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.816 [2024-10-08 16:21:20.964179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.816 [2024-10-08 16:21:20.964255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.816 [2024-10-08 16:21:20.964317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.816 [2024-10-08 16:21:20.964336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.816 { 00:13:27.816 "results": [ 00:13:27.816 { 00:13:27.816 "job": "raid_bdev1", 00:13:27.816 "core_mask": "0x1", 00:13:27.816 "workload": "randrw", 00:13:27.816 "percentage": 50, 00:13:27.816 "status": "finished", 00:13:27.816 "queue_depth": 1, 00:13:27.816 "io_size": 131072, 00:13:27.816 "runtime": 1.408224, 00:13:27.816 "iops": 10729.116958665667, 00:13:27.816 "mibps": 1341.1396198332084, 00:13:27.816 "io_failed": 1, 00:13:27.816 "io_timeout": 0, 00:13:27.816 "avg_latency_us": 130.46583863786776, 00:13:27.816 "min_latency_us": 40.49454545454545, 00:13:27.816 "max_latency_us": 2115.0254545454545 00:13:27.816 } 00:13:27.816 ], 00:13:27.816 "core_count": 1 00:13:27.816 } 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73516 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73516 ']' 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73516 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73516 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.816 killing process with pid 73516 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73516' 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73516 00:13:27.816 [2024-10-08 16:21:20.999312] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.816 16:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73516 00:13:28.075 [2024-10-08 16:21:21.293383] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4FOG0JAdZj 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:29.498 00:13:29.498 real 0m5.116s 00:13:29.498 user 0m6.284s 
00:13:29.498 sys 0m0.626s 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.498 16:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.498 ************************************ 00:13:29.498 END TEST raid_write_error_test 00:13:29.498 ************************************ 00:13:29.498 16:21:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:29.498 16:21:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:29.498 16:21:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:29.498 16:21:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.498 16:21:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.498 ************************************ 00:13:29.498 START TEST raid_state_function_test 00:13:29.498 ************************************ 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.498 
16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:29.498 16:21:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73664 00:13:29.498 Process raid pid: 73664 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73664' 00:13:29.498 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73664 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73664 ']' 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.499 16:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.499 [2024-10-08 16:21:22.772921] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:13:29.499 [2024-10-08 16:21:22.773122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.758 [2024-10-08 16:21:22.963639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.017 [2024-10-08 16:21:23.299114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.276 [2024-10-08 16:21:23.500804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.276 [2024-10-08 16:21:23.500864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.535 [2024-10-08 16:21:23.824497] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.535 [2024-10-08 16:21:23.824600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.535 [2024-10-08 16:21:23.824617] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.535 [2024-10-08 16:21:23.824635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.535 [2024-10-08 16:21:23.824645] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:30.535 [2024-10-08 16:21:23.824660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.535 [2024-10-08 16:21:23.824670] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:30.535 [2024-10-08 16:21:23.824685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.535 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.796 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.796 "name": "Existed_Raid", 00:13:30.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.796 "strip_size_kb": 0, 00:13:30.796 "state": "configuring", 00:13:30.796 "raid_level": "raid1", 00:13:30.796 "superblock": false, 00:13:30.796 "num_base_bdevs": 4, 00:13:30.796 "num_base_bdevs_discovered": 0, 00:13:30.796 "num_base_bdevs_operational": 4, 00:13:30.796 "base_bdevs_list": [ 00:13:30.796 { 00:13:30.796 "name": "BaseBdev1", 00:13:30.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.796 "is_configured": false, 00:13:30.796 "data_offset": 0, 00:13:30.796 "data_size": 0 00:13:30.796 }, 00:13:30.796 { 00:13:30.796 "name": "BaseBdev2", 00:13:30.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.796 "is_configured": false, 00:13:30.796 "data_offset": 0, 00:13:30.796 "data_size": 0 00:13:30.796 }, 00:13:30.796 { 00:13:30.796 "name": "BaseBdev3", 00:13:30.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.796 "is_configured": false, 00:13:30.796 "data_offset": 0, 00:13:30.796 "data_size": 0 00:13:30.796 }, 00:13:30.796 { 00:13:30.796 "name": "BaseBdev4", 00:13:30.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.797 "is_configured": false, 00:13:30.797 "data_offset": 0, 00:13:30.797 "data_size": 0 00:13:30.797 } 00:13:30.797 ] 00:13:30.797 }' 00:13:30.797 16:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.797 16:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.055 [2024-10-08 16:21:24.360524] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.055 [2024-10-08 16:21:24.360594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.055 [2024-10-08 16:21:24.368537] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.055 [2024-10-08 16:21:24.368582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.055 [2024-10-08 16:21:24.368596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.055 [2024-10-08 16:21:24.368612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.055 [2024-10-08 16:21:24.368621] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.055 [2024-10-08 16:21:24.368644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.055 [2024-10-08 16:21:24.368654] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:31.055 [2024-10-08 16:21:24.368667] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.055 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.313 [2024-10-08 16:21:24.439551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.313 BaseBdev1 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.313 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.313 [ 00:13:31.313 { 00:13:31.313 "name": "BaseBdev1", 00:13:31.313 "aliases": [ 00:13:31.313 "0a48f031-edd5-48a3-b016-853726af7ceb" 00:13:31.313 ], 00:13:31.313 "product_name": "Malloc disk", 00:13:31.313 "block_size": 512, 00:13:31.313 "num_blocks": 65536, 00:13:31.313 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:31.313 "assigned_rate_limits": { 00:13:31.313 "rw_ios_per_sec": 0, 00:13:31.313 "rw_mbytes_per_sec": 0, 00:13:31.313 "r_mbytes_per_sec": 0, 00:13:31.313 "w_mbytes_per_sec": 0 00:13:31.313 }, 00:13:31.313 "claimed": true, 00:13:31.313 "claim_type": "exclusive_write", 00:13:31.313 "zoned": false, 00:13:31.313 "supported_io_types": { 00:13:31.313 "read": true, 00:13:31.313 "write": true, 00:13:31.313 "unmap": true, 00:13:31.313 "flush": true, 00:13:31.313 "reset": true, 00:13:31.313 "nvme_admin": false, 00:13:31.313 "nvme_io": false, 00:13:31.313 "nvme_io_md": false, 00:13:31.313 "write_zeroes": true, 00:13:31.313 "zcopy": true, 00:13:31.313 "get_zone_info": false, 00:13:31.313 "zone_management": false, 00:13:31.313 "zone_append": false, 00:13:31.313 "compare": false, 00:13:31.313 "compare_and_write": false, 00:13:31.313 "abort": true, 00:13:31.313 "seek_hole": false, 00:13:31.313 "seek_data": false, 00:13:31.314 "copy": true, 00:13:31.314 "nvme_iov_md": false 00:13:31.314 }, 00:13:31.314 "memory_domains": [ 00:13:31.314 { 00:13:31.314 "dma_device_id": "system", 00:13:31.314 "dma_device_type": 1 00:13:31.314 }, 00:13:31.314 { 00:13:31.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.314 "dma_device_type": 2 00:13:31.314 } 00:13:31.314 ], 00:13:31.314 "driver_specific": {} 00:13:31.314 } 00:13:31.314 ] 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.314 "name": "Existed_Raid", 
00:13:31.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.314 "strip_size_kb": 0, 00:13:31.314 "state": "configuring", 00:13:31.314 "raid_level": "raid1", 00:13:31.314 "superblock": false, 00:13:31.314 "num_base_bdevs": 4, 00:13:31.314 "num_base_bdevs_discovered": 1, 00:13:31.314 "num_base_bdevs_operational": 4, 00:13:31.314 "base_bdevs_list": [ 00:13:31.314 { 00:13:31.314 "name": "BaseBdev1", 00:13:31.314 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:31.314 "is_configured": true, 00:13:31.314 "data_offset": 0, 00:13:31.314 "data_size": 65536 00:13:31.314 }, 00:13:31.314 { 00:13:31.314 "name": "BaseBdev2", 00:13:31.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.314 "is_configured": false, 00:13:31.314 "data_offset": 0, 00:13:31.314 "data_size": 0 00:13:31.314 }, 00:13:31.314 { 00:13:31.314 "name": "BaseBdev3", 00:13:31.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.314 "is_configured": false, 00:13:31.314 "data_offset": 0, 00:13:31.314 "data_size": 0 00:13:31.314 }, 00:13:31.314 { 00:13:31.314 "name": "BaseBdev4", 00:13:31.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.314 "is_configured": false, 00:13:31.314 "data_offset": 0, 00:13:31.314 "data_size": 0 00:13:31.314 } 00:13:31.314 ] 00:13:31.314 }' 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.314 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 [2024-10-08 16:21:24.991825] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.879 [2024-10-08 16:21:24.991914] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.879 16:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 [2024-10-08 16:21:24.999837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.879 [2024-10-08 16:21:25.002212] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.879 [2024-10-08 16:21:25.002273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.879 [2024-10-08 16:21:25.002303] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.879 [2024-10-08 16:21:25.002320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.879 [2024-10-08 16:21:25.002331] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:31.879 [2024-10-08 16:21:25.002344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.879 
16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.879 "name": "Existed_Raid", 00:13:31.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.879 "strip_size_kb": 0, 00:13:31.879 "state": "configuring", 00:13:31.879 "raid_level": "raid1", 00:13:31.879 "superblock": false, 00:13:31.879 "num_base_bdevs": 4, 00:13:31.879 "num_base_bdevs_discovered": 1, 
00:13:31.879 "num_base_bdevs_operational": 4, 00:13:31.879 "base_bdevs_list": [ 00:13:31.879 { 00:13:31.879 "name": "BaseBdev1", 00:13:31.879 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:31.879 "is_configured": true, 00:13:31.879 "data_offset": 0, 00:13:31.879 "data_size": 65536 00:13:31.879 }, 00:13:31.879 { 00:13:31.879 "name": "BaseBdev2", 00:13:31.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.879 "is_configured": false, 00:13:31.879 "data_offset": 0, 00:13:31.879 "data_size": 0 00:13:31.879 }, 00:13:31.879 { 00:13:31.879 "name": "BaseBdev3", 00:13:31.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.879 "is_configured": false, 00:13:31.879 "data_offset": 0, 00:13:31.879 "data_size": 0 00:13:31.879 }, 00:13:31.879 { 00:13:31.879 "name": "BaseBdev4", 00:13:31.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.879 "is_configured": false, 00:13:31.879 "data_offset": 0, 00:13:31.879 "data_size": 0 00:13:31.879 } 00:13:31.879 ] 00:13:31.879 }' 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.879 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.445 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 [2024-10-08 16:21:25.563764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.446 BaseBdev2 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 [ 00:13:32.446 { 00:13:32.446 "name": "BaseBdev2", 00:13:32.446 "aliases": [ 00:13:32.446 "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5" 00:13:32.446 ], 00:13:32.446 "product_name": "Malloc disk", 00:13:32.446 "block_size": 512, 00:13:32.446 "num_blocks": 65536, 00:13:32.446 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:32.446 "assigned_rate_limits": { 00:13:32.446 "rw_ios_per_sec": 0, 00:13:32.446 "rw_mbytes_per_sec": 0, 00:13:32.446 "r_mbytes_per_sec": 0, 00:13:32.446 "w_mbytes_per_sec": 0 00:13:32.446 }, 00:13:32.446 "claimed": true, 00:13:32.446 "claim_type": "exclusive_write", 00:13:32.446 "zoned": false, 00:13:32.446 "supported_io_types": { 00:13:32.446 "read": true, 
00:13:32.446 "write": true, 00:13:32.446 "unmap": true, 00:13:32.446 "flush": true, 00:13:32.446 "reset": true, 00:13:32.446 "nvme_admin": false, 00:13:32.446 "nvme_io": false, 00:13:32.446 "nvme_io_md": false, 00:13:32.446 "write_zeroes": true, 00:13:32.446 "zcopy": true, 00:13:32.446 "get_zone_info": false, 00:13:32.446 "zone_management": false, 00:13:32.446 "zone_append": false, 00:13:32.446 "compare": false, 00:13:32.446 "compare_and_write": false, 00:13:32.446 "abort": true, 00:13:32.446 "seek_hole": false, 00:13:32.446 "seek_data": false, 00:13:32.446 "copy": true, 00:13:32.446 "nvme_iov_md": false 00:13:32.446 }, 00:13:32.446 "memory_domains": [ 00:13:32.446 { 00:13:32.446 "dma_device_id": "system", 00:13:32.446 "dma_device_type": 1 00:13:32.446 }, 00:13:32.446 { 00:13:32.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.446 "dma_device_type": 2 00:13:32.446 } 00:13:32.446 ], 00:13:32.446 "driver_specific": {} 00:13:32.446 } 00:13:32.446 ] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.446 "name": "Existed_Raid", 00:13:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.446 "strip_size_kb": 0, 00:13:32.446 "state": "configuring", 00:13:32.446 "raid_level": "raid1", 00:13:32.446 "superblock": false, 00:13:32.446 "num_base_bdevs": 4, 00:13:32.446 "num_base_bdevs_discovered": 2, 00:13:32.446 "num_base_bdevs_operational": 4, 00:13:32.446 "base_bdevs_list": [ 00:13:32.446 { 00:13:32.446 "name": "BaseBdev1", 00:13:32.446 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:32.446 "is_configured": true, 00:13:32.446 "data_offset": 0, 00:13:32.446 "data_size": 65536 00:13:32.446 }, 00:13:32.446 { 00:13:32.446 "name": "BaseBdev2", 00:13:32.446 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:32.446 "is_configured": true, 
00:13:32.446 "data_offset": 0, 00:13:32.446 "data_size": 65536 00:13:32.446 }, 00:13:32.446 { 00:13:32.446 "name": "BaseBdev3", 00:13:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.446 "is_configured": false, 00:13:32.446 "data_offset": 0, 00:13:32.446 "data_size": 0 00:13:32.446 }, 00:13:32.446 { 00:13:32.446 "name": "BaseBdev4", 00:13:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.446 "is_configured": false, 00:13:32.446 "data_offset": 0, 00:13:32.446 "data_size": 0 00:13:32.446 } 00:13:32.446 ] 00:13:32.446 }' 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.446 16:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.013 [2024-10-08 16:21:26.138778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.013 BaseBdev3 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.013 [ 00:13:33.013 { 00:13:33.013 "name": "BaseBdev3", 00:13:33.013 "aliases": [ 00:13:33.013 "4d93c363-90c6-46b8-bb5b-6c77780d8942" 00:13:33.013 ], 00:13:33.013 "product_name": "Malloc disk", 00:13:33.013 "block_size": 512, 00:13:33.013 "num_blocks": 65536, 00:13:33.013 "uuid": "4d93c363-90c6-46b8-bb5b-6c77780d8942", 00:13:33.013 "assigned_rate_limits": { 00:13:33.013 "rw_ios_per_sec": 0, 00:13:33.013 "rw_mbytes_per_sec": 0, 00:13:33.013 "r_mbytes_per_sec": 0, 00:13:33.013 "w_mbytes_per_sec": 0 00:13:33.013 }, 00:13:33.013 "claimed": true, 00:13:33.013 "claim_type": "exclusive_write", 00:13:33.013 "zoned": false, 00:13:33.013 "supported_io_types": { 00:13:33.013 "read": true, 00:13:33.013 "write": true, 00:13:33.013 "unmap": true, 00:13:33.013 "flush": true, 00:13:33.013 "reset": true, 00:13:33.013 "nvme_admin": false, 00:13:33.013 "nvme_io": false, 00:13:33.013 "nvme_io_md": false, 00:13:33.013 "write_zeroes": true, 00:13:33.013 "zcopy": true, 00:13:33.013 "get_zone_info": false, 00:13:33.013 "zone_management": false, 00:13:33.013 "zone_append": false, 00:13:33.013 "compare": false, 00:13:33.013 "compare_and_write": false, 
00:13:33.013 "abort": true, 00:13:33.013 "seek_hole": false, 00:13:33.013 "seek_data": false, 00:13:33.013 "copy": true, 00:13:33.013 "nvme_iov_md": false 00:13:33.013 }, 00:13:33.013 "memory_domains": [ 00:13:33.013 { 00:13:33.013 "dma_device_id": "system", 00:13:33.013 "dma_device_type": 1 00:13:33.013 }, 00:13:33.013 { 00:13:33.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.013 "dma_device_type": 2 00:13:33.013 } 00:13:33.013 ], 00:13:33.013 "driver_specific": {} 00:13:33.013 } 00:13:33.013 ] 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.013 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.014 "name": "Existed_Raid", 00:13:33.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.014 "strip_size_kb": 0, 00:13:33.014 "state": "configuring", 00:13:33.014 "raid_level": "raid1", 00:13:33.014 "superblock": false, 00:13:33.014 "num_base_bdevs": 4, 00:13:33.014 "num_base_bdevs_discovered": 3, 00:13:33.014 "num_base_bdevs_operational": 4, 00:13:33.014 "base_bdevs_list": [ 00:13:33.014 { 00:13:33.014 "name": "BaseBdev1", 00:13:33.014 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:33.014 "is_configured": true, 00:13:33.014 "data_offset": 0, 00:13:33.014 "data_size": 65536 00:13:33.014 }, 00:13:33.014 { 00:13:33.014 "name": "BaseBdev2", 00:13:33.014 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:33.014 "is_configured": true, 00:13:33.014 "data_offset": 0, 00:13:33.014 "data_size": 65536 00:13:33.014 }, 00:13:33.014 { 00:13:33.014 "name": "BaseBdev3", 00:13:33.014 "uuid": "4d93c363-90c6-46b8-bb5b-6c77780d8942", 00:13:33.014 "is_configured": true, 00:13:33.014 "data_offset": 0, 00:13:33.014 "data_size": 65536 00:13:33.014 }, 00:13:33.014 { 00:13:33.014 "name": "BaseBdev4", 00:13:33.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.014 "is_configured": false, 
00:13:33.014 "data_offset": 0, 00:13:33.014 "data_size": 0 00:13:33.014 } 00:13:33.014 ] 00:13:33.014 }' 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.014 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.581 [2024-10-08 16:21:26.727591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.581 [2024-10-08 16:21:26.727675] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:33.581 [2024-10-08 16:21:26.727687] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:33.581 [2024-10-08 16:21:26.728008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:33.581 [2024-10-08 16:21:26.728208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:33.581 [2024-10-08 16:21:26.728238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:33.581 [2024-10-08 16:21:26.728586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.581 BaseBdev4 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.581 [ 00:13:33.581 { 00:13:33.581 "name": "BaseBdev4", 00:13:33.581 "aliases": [ 00:13:33.581 "d35e5f22-2f75-4c3a-8ca7-117643a0f181" 00:13:33.581 ], 00:13:33.581 "product_name": "Malloc disk", 00:13:33.581 "block_size": 512, 00:13:33.581 "num_blocks": 65536, 00:13:33.581 "uuid": "d35e5f22-2f75-4c3a-8ca7-117643a0f181", 00:13:33.581 "assigned_rate_limits": { 00:13:33.581 "rw_ios_per_sec": 0, 00:13:33.581 "rw_mbytes_per_sec": 0, 00:13:33.581 "r_mbytes_per_sec": 0, 00:13:33.581 "w_mbytes_per_sec": 0 00:13:33.581 }, 00:13:33.581 "claimed": true, 00:13:33.581 "claim_type": "exclusive_write", 00:13:33.581 "zoned": false, 00:13:33.581 "supported_io_types": { 00:13:33.581 "read": true, 00:13:33.581 "write": true, 00:13:33.581 "unmap": true, 00:13:33.581 "flush": true, 00:13:33.581 "reset": true, 00:13:33.581 
"nvme_admin": false, 00:13:33.581 "nvme_io": false, 00:13:33.581 "nvme_io_md": false, 00:13:33.581 "write_zeroes": true, 00:13:33.581 "zcopy": true, 00:13:33.581 "get_zone_info": false, 00:13:33.581 "zone_management": false, 00:13:33.581 "zone_append": false, 00:13:33.581 "compare": false, 00:13:33.581 "compare_and_write": false, 00:13:33.581 "abort": true, 00:13:33.581 "seek_hole": false, 00:13:33.581 "seek_data": false, 00:13:33.581 "copy": true, 00:13:33.581 "nvme_iov_md": false 00:13:33.581 }, 00:13:33.581 "memory_domains": [ 00:13:33.581 { 00:13:33.581 "dma_device_id": "system", 00:13:33.581 "dma_device_type": 1 00:13:33.581 }, 00:13:33.581 { 00:13:33.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.581 "dma_device_type": 2 00:13:33.581 } 00:13:33.581 ], 00:13:33.581 "driver_specific": {} 00:13:33.581 } 00:13:33.581 ] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.581 16:21:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.581 "name": "Existed_Raid", 00:13:33.581 "uuid": "0aa9106c-0e00-4a6c-9aed-0bab83c441b0", 00:13:33.581 "strip_size_kb": 0, 00:13:33.581 "state": "online", 00:13:33.581 "raid_level": "raid1", 00:13:33.581 "superblock": false, 00:13:33.581 "num_base_bdevs": 4, 00:13:33.581 "num_base_bdevs_discovered": 4, 00:13:33.581 "num_base_bdevs_operational": 4, 00:13:33.581 "base_bdevs_list": [ 00:13:33.581 { 00:13:33.581 "name": "BaseBdev1", 00:13:33.581 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:33.581 "is_configured": true, 00:13:33.581 "data_offset": 0, 00:13:33.581 "data_size": 65536 00:13:33.581 }, 00:13:33.581 { 00:13:33.581 "name": "BaseBdev2", 00:13:33.581 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:33.581 "is_configured": true, 00:13:33.581 "data_offset": 0, 00:13:33.581 "data_size": 65536 00:13:33.581 }, 00:13:33.581 { 00:13:33.581 "name": "BaseBdev3", 00:13:33.581 "uuid": 
"4d93c363-90c6-46b8-bb5b-6c77780d8942", 00:13:33.581 "is_configured": true, 00:13:33.581 "data_offset": 0, 00:13:33.581 "data_size": 65536 00:13:33.581 }, 00:13:33.581 { 00:13:33.581 "name": "BaseBdev4", 00:13:33.581 "uuid": "d35e5f22-2f75-4c3a-8ca7-117643a0f181", 00:13:33.581 "is_configured": true, 00:13:33.581 "data_offset": 0, 00:13:33.581 "data_size": 65536 00:13:33.581 } 00:13:33.581 ] 00:13:33.581 }' 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.581 16:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:34.149 [2024-10-08 16:21:27.260202] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.149 16:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:34.149 "name": "Existed_Raid", 00:13:34.149 "aliases": [ 00:13:34.149 "0aa9106c-0e00-4a6c-9aed-0bab83c441b0" 00:13:34.149 ], 00:13:34.149 "product_name": "Raid Volume", 00:13:34.149 "block_size": 512, 00:13:34.149 "num_blocks": 65536, 00:13:34.149 "uuid": "0aa9106c-0e00-4a6c-9aed-0bab83c441b0", 00:13:34.149 "assigned_rate_limits": { 00:13:34.149 "rw_ios_per_sec": 0, 00:13:34.149 "rw_mbytes_per_sec": 0, 00:13:34.149 "r_mbytes_per_sec": 0, 00:13:34.149 "w_mbytes_per_sec": 0 00:13:34.149 }, 00:13:34.149 "claimed": false, 00:13:34.149 "zoned": false, 00:13:34.149 "supported_io_types": { 00:13:34.149 "read": true, 00:13:34.149 "write": true, 00:13:34.149 "unmap": false, 00:13:34.149 "flush": false, 00:13:34.149 "reset": true, 00:13:34.149 "nvme_admin": false, 00:13:34.149 "nvme_io": false, 00:13:34.149 "nvme_io_md": false, 00:13:34.149 "write_zeroes": true, 00:13:34.149 "zcopy": false, 00:13:34.149 "get_zone_info": false, 00:13:34.149 "zone_management": false, 00:13:34.149 "zone_append": false, 00:13:34.149 "compare": false, 00:13:34.149 "compare_and_write": false, 00:13:34.149 "abort": false, 00:13:34.149 "seek_hole": false, 00:13:34.149 "seek_data": false, 00:13:34.149 "copy": false, 00:13:34.149 "nvme_iov_md": false 00:13:34.149 }, 00:13:34.149 "memory_domains": [ 00:13:34.149 { 00:13:34.149 "dma_device_id": "system", 00:13:34.149 "dma_device_type": 1 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.149 "dma_device_type": 2 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "system", 00:13:34.149 "dma_device_type": 1 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.149 "dma_device_type": 2 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "system", 00:13:34.149 "dma_device_type": 1 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:34.149 "dma_device_type": 2 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "system", 00:13:34.149 "dma_device_type": 1 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.149 "dma_device_type": 2 00:13:34.149 } 00:13:34.149 ], 00:13:34.149 "driver_specific": { 00:13:34.149 "raid": { 00:13:34.149 "uuid": "0aa9106c-0e00-4a6c-9aed-0bab83c441b0", 00:13:34.149 "strip_size_kb": 0, 00:13:34.149 "state": "online", 00:13:34.149 "raid_level": "raid1", 00:13:34.149 "superblock": false, 00:13:34.149 "num_base_bdevs": 4, 00:13:34.149 "num_base_bdevs_discovered": 4, 00:13:34.149 "num_base_bdevs_operational": 4, 00:13:34.149 "base_bdevs_list": [ 00:13:34.149 { 00:13:34.149 "name": "BaseBdev1", 00:13:34.149 "uuid": "0a48f031-edd5-48a3-b016-853726af7ceb", 00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 0, 00:13:34.149 "data_size": 65536 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "name": "BaseBdev2", 00:13:34.149 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 0, 00:13:34.149 "data_size": 65536 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "name": "BaseBdev3", 00:13:34.149 "uuid": "4d93c363-90c6-46b8-bb5b-6c77780d8942", 00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 0, 00:13:34.149 "data_size": 65536 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "name": "BaseBdev4", 00:13:34.149 "uuid": "d35e5f22-2f75-4c3a-8ca7-117643a0f181", 00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 0, 00:13:34.149 "data_size": 65536 00:13:34.149 } 00:13:34.149 ] 00:13:34.149 } 00:13:34.149 } 00:13:34.149 }' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:34.149 BaseBdev2 00:13:34.149 BaseBdev3 
00:13:34.149 BaseBdev4' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.149 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.407 16:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.407 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.408 16:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 [2024-10-08 16:21:27.607975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.408 
16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.665 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.665 "name": "Existed_Raid", 00:13:34.665 "uuid": "0aa9106c-0e00-4a6c-9aed-0bab83c441b0", 00:13:34.665 "strip_size_kb": 0, 00:13:34.665 "state": "online", 00:13:34.665 "raid_level": "raid1", 00:13:34.665 "superblock": false, 00:13:34.665 "num_base_bdevs": 4, 00:13:34.665 "num_base_bdevs_discovered": 3, 00:13:34.665 "num_base_bdevs_operational": 3, 00:13:34.665 "base_bdevs_list": [ 00:13:34.665 { 00:13:34.665 "name": null, 00:13:34.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.665 "is_configured": false, 00:13:34.665 "data_offset": 0, 00:13:34.665 "data_size": 65536 00:13:34.665 }, 00:13:34.665 { 00:13:34.665 "name": "BaseBdev2", 00:13:34.665 "uuid": "d9e173e6-33fb-4a6d-9a6d-823d1d85b9a5", 00:13:34.665 "is_configured": true, 00:13:34.665 "data_offset": 0, 00:13:34.665 "data_size": 65536 00:13:34.665 }, 00:13:34.665 { 00:13:34.665 "name": "BaseBdev3", 00:13:34.665 "uuid": "4d93c363-90c6-46b8-bb5b-6c77780d8942", 00:13:34.665 "is_configured": true, 00:13:34.665 "data_offset": 0, 
00:13:34.665 "data_size": 65536 00:13:34.665 }, 00:13:34.665 { 00:13:34.665 "name": "BaseBdev4", 00:13:34.665 "uuid": "d35e5f22-2f75-4c3a-8ca7-117643a0f181", 00:13:34.665 "is_configured": true, 00:13:34.665 "data_offset": 0, 00:13:34.665 "data_size": 65536 00:13:34.665 } 00:13:34.665 ] 00:13:34.665 }' 00:13:34.665 16:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.665 16:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.923 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.185 [2024-10-08 16:21:28.250116] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.185 [2024-10-08 16:21:28.397854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.185 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.445 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.445 [2024-10-08 16:21:28.546890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:35.446 [2024-10-08 16:21:28.547031] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.446 [2024-10-08 16:21:28.635184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.446 [2024-10-08 16:21:28.635301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.446 [2024-10-08 16:21:28.635323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 BaseBdev2 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 [ 00:13:35.446 { 00:13:35.446 "name": "BaseBdev2", 00:13:35.446 "aliases": [ 00:13:35.446 "323bba8e-8e99-43f2-9bc0-3b2f251d8343" 00:13:35.446 ], 00:13:35.446 "product_name": "Malloc disk", 00:13:35.446 "block_size": 512, 00:13:35.446 "num_blocks": 65536, 00:13:35.446 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:35.446 "assigned_rate_limits": { 00:13:35.446 "rw_ios_per_sec": 0, 00:13:35.446 "rw_mbytes_per_sec": 0, 00:13:35.446 "r_mbytes_per_sec": 0, 00:13:35.446 "w_mbytes_per_sec": 0 00:13:35.446 }, 00:13:35.446 "claimed": false, 00:13:35.446 "zoned": false, 00:13:35.446 "supported_io_types": { 00:13:35.446 "read": true, 00:13:35.446 "write": true, 00:13:35.446 "unmap": true, 00:13:35.446 "flush": true, 00:13:35.446 "reset": true, 00:13:35.446 "nvme_admin": false, 00:13:35.446 "nvme_io": false, 00:13:35.446 "nvme_io_md": false, 00:13:35.446 "write_zeroes": true, 00:13:35.446 "zcopy": true, 00:13:35.446 "get_zone_info": false, 00:13:35.446 "zone_management": false, 00:13:35.446 "zone_append": false, 00:13:35.446 "compare": false, 
00:13:35.446 "compare_and_write": false, 00:13:35.446 "abort": true, 00:13:35.446 "seek_hole": false, 00:13:35.446 "seek_data": false, 00:13:35.446 "copy": true, 00:13:35.446 "nvme_iov_md": false 00:13:35.446 }, 00:13:35.446 "memory_domains": [ 00:13:35.446 { 00:13:35.446 "dma_device_id": "system", 00:13:35.446 "dma_device_type": 1 00:13:35.446 }, 00:13:35.446 { 00:13:35.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.446 "dma_device_type": 2 00:13:35.446 } 00:13:35.446 ], 00:13:35.446 "driver_specific": {} 00:13:35.446 } 00:13:35.446 ] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.446 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.715 BaseBdev3 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.715 [ 00:13:35.715 { 00:13:35.715 "name": "BaseBdev3", 00:13:35.715 "aliases": [ 00:13:35.715 "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd" 00:13:35.715 ], 00:13:35.715 "product_name": "Malloc disk", 00:13:35.715 "block_size": 512, 00:13:35.715 "num_blocks": 65536, 00:13:35.715 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:35.715 "assigned_rate_limits": { 00:13:35.715 "rw_ios_per_sec": 0, 00:13:35.715 "rw_mbytes_per_sec": 0, 00:13:35.715 "r_mbytes_per_sec": 0, 00:13:35.715 "w_mbytes_per_sec": 0 00:13:35.715 }, 00:13:35.715 "claimed": false, 00:13:35.715 "zoned": false, 00:13:35.715 "supported_io_types": { 00:13:35.715 "read": true, 00:13:35.715 "write": true, 00:13:35.715 "unmap": true, 00:13:35.715 "flush": true, 00:13:35.715 "reset": true, 00:13:35.715 "nvme_admin": false, 00:13:35.715 "nvme_io": false, 00:13:35.715 "nvme_io_md": false, 00:13:35.715 "write_zeroes": true, 00:13:35.715 "zcopy": true, 00:13:35.715 "get_zone_info": false, 00:13:35.715 "zone_management": false, 00:13:35.715 "zone_append": false, 00:13:35.715 "compare": false, 00:13:35.715 
"compare_and_write": false, 00:13:35.715 "abort": true, 00:13:35.715 "seek_hole": false, 00:13:35.715 "seek_data": false, 00:13:35.715 "copy": true, 00:13:35.715 "nvme_iov_md": false 00:13:35.715 }, 00:13:35.715 "memory_domains": [ 00:13:35.715 { 00:13:35.715 "dma_device_id": "system", 00:13:35.715 "dma_device_type": 1 00:13:35.715 }, 00:13:35.715 { 00:13:35.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.715 "dma_device_type": 2 00:13:35.715 } 00:13:35.715 ], 00:13:35.715 "driver_specific": {} 00:13:35.715 } 00:13:35.715 ] 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.715 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.715 BaseBdev4 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.716 [ 00:13:35.716 { 00:13:35.716 "name": "BaseBdev4", 00:13:35.716 "aliases": [ 00:13:35.716 "a5ecdb48-41e9-49cd-b231-23c1435095ce" 00:13:35.716 ], 00:13:35.716 "product_name": "Malloc disk", 00:13:35.716 "block_size": 512, 00:13:35.716 "num_blocks": 65536, 00:13:35.716 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:35.716 "assigned_rate_limits": { 00:13:35.716 "rw_ios_per_sec": 0, 00:13:35.716 "rw_mbytes_per_sec": 0, 00:13:35.716 "r_mbytes_per_sec": 0, 00:13:35.716 "w_mbytes_per_sec": 0 00:13:35.716 }, 00:13:35.716 "claimed": false, 00:13:35.716 "zoned": false, 00:13:35.716 "supported_io_types": { 00:13:35.716 "read": true, 00:13:35.716 "write": true, 00:13:35.716 "unmap": true, 00:13:35.716 "flush": true, 00:13:35.716 "reset": true, 00:13:35.716 "nvme_admin": false, 00:13:35.716 "nvme_io": false, 00:13:35.716 "nvme_io_md": false, 00:13:35.716 "write_zeroes": true, 00:13:35.716 "zcopy": true, 00:13:35.716 "get_zone_info": false, 00:13:35.716 "zone_management": false, 00:13:35.716 "zone_append": false, 00:13:35.716 "compare": false, 00:13:35.716 
"compare_and_write": false, 00:13:35.716 "abort": true, 00:13:35.716 "seek_hole": false, 00:13:35.716 "seek_data": false, 00:13:35.716 "copy": true, 00:13:35.716 "nvme_iov_md": false 00:13:35.716 }, 00:13:35.716 "memory_domains": [ 00:13:35.716 { 00:13:35.716 "dma_device_id": "system", 00:13:35.716 "dma_device_type": 1 00:13:35.716 }, 00:13:35.716 { 00:13:35.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.716 "dma_device_type": 2 00:13:35.716 } 00:13:35.716 ], 00:13:35.716 "driver_specific": {} 00:13:35.716 } 00:13:35.716 ] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.716 [2024-10-08 16:21:28.895560] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.716 [2024-10-08 16:21:28.895642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.716 [2024-10-08 16:21:28.895667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.716 [2024-10-08 16:21:28.897975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.716 [2024-10-08 16:21:28.898041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.716 "name": "Existed_Raid", 00:13:35.716 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:35.716 "strip_size_kb": 0, 00:13:35.716 "state": "configuring", 00:13:35.716 "raid_level": "raid1", 00:13:35.716 "superblock": false, 00:13:35.716 "num_base_bdevs": 4, 00:13:35.716 "num_base_bdevs_discovered": 3, 00:13:35.716 "num_base_bdevs_operational": 4, 00:13:35.716 "base_bdevs_list": [ 00:13:35.716 { 00:13:35.716 "name": "BaseBdev1", 00:13:35.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.716 "is_configured": false, 00:13:35.716 "data_offset": 0, 00:13:35.716 "data_size": 0 00:13:35.716 }, 00:13:35.716 { 00:13:35.716 "name": "BaseBdev2", 00:13:35.716 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:35.716 "is_configured": true, 00:13:35.716 "data_offset": 0, 00:13:35.716 "data_size": 65536 00:13:35.716 }, 00:13:35.716 { 00:13:35.716 "name": "BaseBdev3", 00:13:35.716 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:35.716 "is_configured": true, 00:13:35.716 "data_offset": 0, 00:13:35.716 "data_size": 65536 00:13:35.716 }, 00:13:35.716 { 00:13:35.716 "name": "BaseBdev4", 00:13:35.716 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:35.716 "is_configured": true, 00:13:35.716 "data_offset": 0, 00:13:35.716 "data_size": 65536 00:13:35.716 } 00:13:35.716 ] 00:13:35.716 }' 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.716 16:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.281 [2024-10-08 16:21:29.435754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.281 "name": "Existed_Raid", 00:13:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.281 
"strip_size_kb": 0, 00:13:36.281 "state": "configuring", 00:13:36.281 "raid_level": "raid1", 00:13:36.281 "superblock": false, 00:13:36.281 "num_base_bdevs": 4, 00:13:36.281 "num_base_bdevs_discovered": 2, 00:13:36.281 "num_base_bdevs_operational": 4, 00:13:36.281 "base_bdevs_list": [ 00:13:36.281 { 00:13:36.281 "name": "BaseBdev1", 00:13:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.281 "is_configured": false, 00:13:36.281 "data_offset": 0, 00:13:36.281 "data_size": 0 00:13:36.281 }, 00:13:36.281 { 00:13:36.281 "name": null, 00:13:36.281 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:36.281 "is_configured": false, 00:13:36.281 "data_offset": 0, 00:13:36.281 "data_size": 65536 00:13:36.281 }, 00:13:36.281 { 00:13:36.281 "name": "BaseBdev3", 00:13:36.281 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:36.281 "is_configured": true, 00:13:36.281 "data_offset": 0, 00:13:36.281 "data_size": 65536 00:13:36.281 }, 00:13:36.281 { 00:13:36.281 "name": "BaseBdev4", 00:13:36.281 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:36.281 "is_configured": true, 00:13:36.281 "data_offset": 0, 00:13:36.281 "data_size": 65536 00:13:36.281 } 00:13:36.281 ] 00:13:36.281 }' 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.281 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.849 16:21:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.849 16:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 [2024-10-08 16:21:30.029903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.849 BaseBdev1 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 [ 00:13:36.849 { 00:13:36.849 "name": "BaseBdev1", 00:13:36.849 "aliases": [ 00:13:36.849 "2340f363-66aa-45aa-9157-099e8a49519f" 00:13:36.849 ], 00:13:36.849 "product_name": "Malloc disk", 00:13:36.849 "block_size": 512, 00:13:36.849 "num_blocks": 65536, 00:13:36.849 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:36.849 "assigned_rate_limits": { 00:13:36.849 "rw_ios_per_sec": 0, 00:13:36.849 "rw_mbytes_per_sec": 0, 00:13:36.849 "r_mbytes_per_sec": 0, 00:13:36.849 "w_mbytes_per_sec": 0 00:13:36.849 }, 00:13:36.849 "claimed": true, 00:13:36.849 "claim_type": "exclusive_write", 00:13:36.849 "zoned": false, 00:13:36.849 "supported_io_types": { 00:13:36.849 "read": true, 00:13:36.849 "write": true, 00:13:36.849 "unmap": true, 00:13:36.849 "flush": true, 00:13:36.849 "reset": true, 00:13:36.849 "nvme_admin": false, 00:13:36.849 "nvme_io": false, 00:13:36.849 "nvme_io_md": false, 00:13:36.849 "write_zeroes": true, 00:13:36.849 "zcopy": true, 00:13:36.849 "get_zone_info": false, 00:13:36.849 "zone_management": false, 00:13:36.849 "zone_append": false, 00:13:36.849 "compare": false, 00:13:36.849 "compare_and_write": false, 00:13:36.849 "abort": true, 00:13:36.849 "seek_hole": false, 00:13:36.849 "seek_data": false, 00:13:36.849 "copy": true, 00:13:36.849 "nvme_iov_md": false 00:13:36.849 }, 00:13:36.849 "memory_domains": [ 00:13:36.849 { 00:13:36.849 "dma_device_id": "system", 00:13:36.849 "dma_device_type": 1 00:13:36.849 }, 00:13:36.849 { 00:13:36.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.849 "dma_device_type": 2 00:13:36.849 } 00:13:36.849 ], 00:13:36.849 "driver_specific": {} 00:13:36.849 } 00:13:36.849 ] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.849 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.849 "name": "Existed_Raid", 00:13:36.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.849 
"strip_size_kb": 0, 00:13:36.849 "state": "configuring", 00:13:36.849 "raid_level": "raid1", 00:13:36.849 "superblock": false, 00:13:36.849 "num_base_bdevs": 4, 00:13:36.849 "num_base_bdevs_discovered": 3, 00:13:36.849 "num_base_bdevs_operational": 4, 00:13:36.849 "base_bdevs_list": [ 00:13:36.849 { 00:13:36.849 "name": "BaseBdev1", 00:13:36.849 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:36.849 "is_configured": true, 00:13:36.849 "data_offset": 0, 00:13:36.849 "data_size": 65536 00:13:36.849 }, 00:13:36.849 { 00:13:36.849 "name": null, 00:13:36.849 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:36.849 "is_configured": false, 00:13:36.849 "data_offset": 0, 00:13:36.849 "data_size": 65536 00:13:36.849 }, 00:13:36.849 { 00:13:36.849 "name": "BaseBdev3", 00:13:36.849 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:36.849 "is_configured": true, 00:13:36.849 "data_offset": 0, 00:13:36.849 "data_size": 65536 00:13:36.849 }, 00:13:36.849 { 00:13:36.849 "name": "BaseBdev4", 00:13:36.849 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:36.849 "is_configured": true, 00:13:36.849 "data_offset": 0, 00:13:36.849 "data_size": 65536 00:13:36.849 } 00:13:36.849 ] 00:13:36.849 }' 00:13:36.850 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.850 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.415 
16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.415 [2024-10-08 16:21:30.634174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.415 "name": "Existed_Raid", 00:13:37.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.415 "strip_size_kb": 0, 00:13:37.415 "state": "configuring", 00:13:37.415 "raid_level": "raid1", 00:13:37.415 "superblock": false, 00:13:37.415 "num_base_bdevs": 4, 00:13:37.415 "num_base_bdevs_discovered": 2, 00:13:37.415 "num_base_bdevs_operational": 4, 00:13:37.415 "base_bdevs_list": [ 00:13:37.415 { 00:13:37.415 "name": "BaseBdev1", 00:13:37.415 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:37.415 "is_configured": true, 00:13:37.415 "data_offset": 0, 00:13:37.415 "data_size": 65536 00:13:37.415 }, 00:13:37.415 { 00:13:37.415 "name": null, 00:13:37.415 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:37.415 "is_configured": false, 00:13:37.415 "data_offset": 0, 00:13:37.415 "data_size": 65536 00:13:37.415 }, 00:13:37.415 { 00:13:37.415 "name": null, 00:13:37.415 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:37.415 "is_configured": false, 00:13:37.415 "data_offset": 0, 00:13:37.415 "data_size": 65536 00:13:37.415 }, 00:13:37.415 { 00:13:37.415 "name": "BaseBdev4", 00:13:37.415 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:37.415 "is_configured": true, 00:13:37.415 "data_offset": 0, 00:13:37.415 "data_size": 65536 00:13:37.415 } 00:13:37.415 ] 00:13:37.415 }' 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.415 16:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.046 [2024-10-08 16:21:31.230381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.046 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.047 "name": "Existed_Raid", 00:13:38.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.047 "strip_size_kb": 0, 00:13:38.047 "state": "configuring", 00:13:38.047 "raid_level": "raid1", 00:13:38.047 "superblock": false, 00:13:38.047 "num_base_bdevs": 4, 00:13:38.047 "num_base_bdevs_discovered": 3, 00:13:38.047 "num_base_bdevs_operational": 4, 00:13:38.047 "base_bdevs_list": [ 00:13:38.047 { 00:13:38.047 "name": "BaseBdev1", 00:13:38.047 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:38.047 "is_configured": true, 00:13:38.047 "data_offset": 0, 00:13:38.047 "data_size": 65536 00:13:38.047 }, 00:13:38.047 { 00:13:38.047 "name": null, 00:13:38.047 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:38.047 "is_configured": false, 00:13:38.047 "data_offset": 0, 00:13:38.047 "data_size": 65536 00:13:38.047 }, 00:13:38.047 { 
00:13:38.047 "name": "BaseBdev3", 00:13:38.047 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:38.047 "is_configured": true, 00:13:38.047 "data_offset": 0, 00:13:38.047 "data_size": 65536 00:13:38.047 }, 00:13:38.047 { 00:13:38.047 "name": "BaseBdev4", 00:13:38.047 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:38.047 "is_configured": true, 00:13:38.047 "data_offset": 0, 00:13:38.047 "data_size": 65536 00:13:38.047 } 00:13:38.047 ] 00:13:38.047 }' 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.047 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.612 [2024-10-08 16:21:31.786588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.612 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.613 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.613 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.613 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.613 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.613 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.871 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.871 "name": "Existed_Raid", 00:13:38.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.871 "strip_size_kb": 0, 00:13:38.871 "state": "configuring", 00:13:38.871 "raid_level": "raid1", 00:13:38.871 "superblock": false, 00:13:38.871 
"num_base_bdevs": 4, 00:13:38.871 "num_base_bdevs_discovered": 2, 00:13:38.871 "num_base_bdevs_operational": 4, 00:13:38.871 "base_bdevs_list": [ 00:13:38.871 { 00:13:38.871 "name": null, 00:13:38.871 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:38.871 "is_configured": false, 00:13:38.871 "data_offset": 0, 00:13:38.871 "data_size": 65536 00:13:38.871 }, 00:13:38.871 { 00:13:38.871 "name": null, 00:13:38.871 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:38.871 "is_configured": false, 00:13:38.871 "data_offset": 0, 00:13:38.871 "data_size": 65536 00:13:38.871 }, 00:13:38.871 { 00:13:38.871 "name": "BaseBdev3", 00:13:38.871 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:38.871 "is_configured": true, 00:13:38.871 "data_offset": 0, 00:13:38.871 "data_size": 65536 00:13:38.871 }, 00:13:38.871 { 00:13:38.871 "name": "BaseBdev4", 00:13:38.871 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:38.871 "is_configured": true, 00:13:38.871 "data_offset": 0, 00:13:38.871 "data_size": 65536 00:13:38.871 } 00:13:38.871 ] 00:13:38.871 }' 00:13:38.871 16:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.871 16:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.129 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.129 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.129 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.129 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.129 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:39.388 16:21:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.388 [2024-10-08 16:21:32.457540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.388 16:21:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.388 "name": "Existed_Raid", 00:13:39.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.388 "strip_size_kb": 0, 00:13:39.388 "state": "configuring", 00:13:39.388 "raid_level": "raid1", 00:13:39.388 "superblock": false, 00:13:39.388 "num_base_bdevs": 4, 00:13:39.388 "num_base_bdevs_discovered": 3, 00:13:39.388 "num_base_bdevs_operational": 4, 00:13:39.388 "base_bdevs_list": [ 00:13:39.388 { 00:13:39.388 "name": null, 00:13:39.388 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:39.388 "is_configured": false, 00:13:39.388 "data_offset": 0, 00:13:39.388 "data_size": 65536 00:13:39.388 }, 00:13:39.388 { 00:13:39.388 "name": "BaseBdev2", 00:13:39.388 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:39.388 "is_configured": true, 00:13:39.388 "data_offset": 0, 00:13:39.388 "data_size": 65536 00:13:39.388 }, 00:13:39.388 { 00:13:39.388 "name": "BaseBdev3", 00:13:39.388 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:39.388 "is_configured": true, 00:13:39.388 "data_offset": 0, 00:13:39.388 "data_size": 65536 00:13:39.388 }, 00:13:39.388 { 00:13:39.388 "name": "BaseBdev4", 00:13:39.388 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:39.388 "is_configured": true, 00:13:39.388 "data_offset": 0, 00:13:39.388 "data_size": 65536 00:13:39.388 } 00:13:39.388 ] 00:13:39.388 }' 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.388 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 16:21:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.963 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.963 16:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2340f363-66aa-45aa-9157-099e8a49519f 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 [2024-10-08 16:21:33.107399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:39.963 [2024-10-08 16:21:33.107463] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:39.963 [2024-10-08 16:21:33.107481] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:39.963 [2024-10-08 16:21:33.108071] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:39.963 [2024-10-08 16:21:33.108314] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:39.963 [2024-10-08 16:21:33.108331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:39.963 [2024-10-08 16:21:33.108669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.963 NewBaseBdev 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 [ 00:13:39.963 { 00:13:39.963 "name": "NewBaseBdev", 00:13:39.963 "aliases": [ 00:13:39.963 "2340f363-66aa-45aa-9157-099e8a49519f" 00:13:39.963 ], 00:13:39.963 "product_name": "Malloc disk", 00:13:39.963 "block_size": 512, 00:13:39.963 "num_blocks": 65536, 00:13:39.963 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:39.963 "assigned_rate_limits": { 00:13:39.963 "rw_ios_per_sec": 0, 00:13:39.963 "rw_mbytes_per_sec": 0, 00:13:39.963 "r_mbytes_per_sec": 0, 00:13:39.963 "w_mbytes_per_sec": 0 00:13:39.963 }, 00:13:39.963 "claimed": true, 00:13:39.963 "claim_type": "exclusive_write", 00:13:39.963 "zoned": false, 00:13:39.963 "supported_io_types": { 00:13:39.963 "read": true, 00:13:39.963 "write": true, 00:13:39.963 "unmap": true, 00:13:39.963 "flush": true, 00:13:39.963 "reset": true, 00:13:39.963 "nvme_admin": false, 00:13:39.963 "nvme_io": false, 00:13:39.963 "nvme_io_md": false, 00:13:39.963 "write_zeroes": true, 00:13:39.963 "zcopy": true, 00:13:39.963 "get_zone_info": false, 00:13:39.963 "zone_management": false, 00:13:39.963 "zone_append": false, 00:13:39.963 "compare": false, 00:13:39.963 "compare_and_write": false, 00:13:39.963 "abort": true, 00:13:39.963 "seek_hole": false, 00:13:39.963 "seek_data": false, 00:13:39.963 "copy": true, 00:13:39.963 "nvme_iov_md": false 00:13:39.963 }, 00:13:39.963 "memory_domains": [ 00:13:39.963 { 00:13:39.963 "dma_device_id": "system", 00:13:39.963 "dma_device_type": 1 00:13:39.963 }, 00:13:39.963 { 00:13:39.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.963 "dma_device_type": 2 00:13:39.963 } 00:13:39.963 ], 00:13:39.963 "driver_specific": {} 00:13:39.963 } 00:13:39.963 ] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:39.963 16:21:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.963 "name": "Existed_Raid", 00:13:39.963 "uuid": "938a5622-fbed-48fa-8f2c-62edc4adc41c", 00:13:39.963 "strip_size_kb": 0, 00:13:39.963 "state": "online", 00:13:39.963 "raid_level": "raid1", 
00:13:39.963 "superblock": false, 00:13:39.963 "num_base_bdevs": 4, 00:13:39.963 "num_base_bdevs_discovered": 4, 00:13:39.963 "num_base_bdevs_operational": 4, 00:13:39.963 "base_bdevs_list": [ 00:13:39.963 { 00:13:39.963 "name": "NewBaseBdev", 00:13:39.963 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:39.963 "is_configured": true, 00:13:39.963 "data_offset": 0, 00:13:39.963 "data_size": 65536 00:13:39.963 }, 00:13:39.963 { 00:13:39.963 "name": "BaseBdev2", 00:13:39.963 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:39.963 "is_configured": true, 00:13:39.963 "data_offset": 0, 00:13:39.963 "data_size": 65536 00:13:39.963 }, 00:13:39.963 { 00:13:39.963 "name": "BaseBdev3", 00:13:39.963 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:39.963 "is_configured": true, 00:13:39.963 "data_offset": 0, 00:13:39.963 "data_size": 65536 00:13:39.963 }, 00:13:39.963 { 00:13:39.963 "name": "BaseBdev4", 00:13:39.963 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:39.963 "is_configured": true, 00:13:39.963 "data_offset": 0, 00:13:39.963 "data_size": 65536 00:13:39.963 } 00:13:39.963 ] 00:13:39.963 }' 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.963 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.531 [2024-10-08 16:21:33.676057] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.531 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:40.531 "name": "Existed_Raid", 00:13:40.531 "aliases": [ 00:13:40.531 "938a5622-fbed-48fa-8f2c-62edc4adc41c" 00:13:40.531 ], 00:13:40.531 "product_name": "Raid Volume", 00:13:40.531 "block_size": 512, 00:13:40.531 "num_blocks": 65536, 00:13:40.531 "uuid": "938a5622-fbed-48fa-8f2c-62edc4adc41c", 00:13:40.531 "assigned_rate_limits": { 00:13:40.532 "rw_ios_per_sec": 0, 00:13:40.532 "rw_mbytes_per_sec": 0, 00:13:40.532 "r_mbytes_per_sec": 0, 00:13:40.532 "w_mbytes_per_sec": 0 00:13:40.532 }, 00:13:40.532 "claimed": false, 00:13:40.532 "zoned": false, 00:13:40.532 "supported_io_types": { 00:13:40.532 "read": true, 00:13:40.532 "write": true, 00:13:40.532 "unmap": false, 00:13:40.532 "flush": false, 00:13:40.532 "reset": true, 00:13:40.532 "nvme_admin": false, 00:13:40.532 "nvme_io": false, 00:13:40.532 "nvme_io_md": false, 00:13:40.532 "write_zeroes": true, 00:13:40.532 "zcopy": false, 00:13:40.532 "get_zone_info": false, 00:13:40.532 "zone_management": false, 00:13:40.532 "zone_append": false, 00:13:40.532 "compare": false, 00:13:40.532 "compare_and_write": false, 00:13:40.532 "abort": false, 00:13:40.532 "seek_hole": false, 00:13:40.532 "seek_data": false, 00:13:40.532 "copy": false, 00:13:40.532 
"nvme_iov_md": false 00:13:40.532 }, 00:13:40.532 "memory_domains": [ 00:13:40.532 { 00:13:40.532 "dma_device_id": "system", 00:13:40.532 "dma_device_type": 1 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.532 "dma_device_type": 2 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "system", 00:13:40.532 "dma_device_type": 1 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.532 "dma_device_type": 2 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "system", 00:13:40.532 "dma_device_type": 1 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.532 "dma_device_type": 2 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "system", 00:13:40.532 "dma_device_type": 1 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.532 "dma_device_type": 2 00:13:40.532 } 00:13:40.532 ], 00:13:40.532 "driver_specific": { 00:13:40.532 "raid": { 00:13:40.532 "uuid": "938a5622-fbed-48fa-8f2c-62edc4adc41c", 00:13:40.532 "strip_size_kb": 0, 00:13:40.532 "state": "online", 00:13:40.532 "raid_level": "raid1", 00:13:40.532 "superblock": false, 00:13:40.532 "num_base_bdevs": 4, 00:13:40.532 "num_base_bdevs_discovered": 4, 00:13:40.532 "num_base_bdevs_operational": 4, 00:13:40.532 "base_bdevs_list": [ 00:13:40.532 { 00:13:40.532 "name": "NewBaseBdev", 00:13:40.532 "uuid": "2340f363-66aa-45aa-9157-099e8a49519f", 00:13:40.532 "is_configured": true, 00:13:40.532 "data_offset": 0, 00:13:40.532 "data_size": 65536 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "name": "BaseBdev2", 00:13:40.532 "uuid": "323bba8e-8e99-43f2-9bc0-3b2f251d8343", 00:13:40.532 "is_configured": true, 00:13:40.532 "data_offset": 0, 00:13:40.532 "data_size": 65536 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "name": "BaseBdev3", 00:13:40.532 "uuid": "2ea1077e-dc18-41ef-b6b0-01d5f9c498dd", 00:13:40.532 "is_configured": true, 
00:13:40.532 "data_offset": 0, 00:13:40.532 "data_size": 65536 00:13:40.532 }, 00:13:40.532 { 00:13:40.532 "name": "BaseBdev4", 00:13:40.532 "uuid": "a5ecdb48-41e9-49cd-b231-23c1435095ce", 00:13:40.532 "is_configured": true, 00:13:40.532 "data_offset": 0, 00:13:40.532 "data_size": 65536 00:13:40.532 } 00:13:40.532 ] 00:13:40.532 } 00:13:40.532 } 00:13:40.532 }' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:40.532 BaseBdev2 00:13:40.532 BaseBdev3 00:13:40.532 BaseBdev4' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.532 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.790 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.791 16:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.791 [2024-10-08 16:21:34.047683] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.791 [2024-10-08 16:21:34.047889] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.791 [2024-10-08 16:21:34.048098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.791 [2024-10-08 16:21:34.048619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.791 [2024-10-08 16:21:34.048761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73664 
00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73664 ']' 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73664 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73664 00:13:40.791 killing process with pid 73664 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73664' 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73664 00:13:40.791 [2024-10-08 16:21:34.086550] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.791 16:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73664 00:13:41.357 [2024-10-08 16:21:34.446229] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:42.738 00:13:42.738 real 0m13.002s 00:13:42.738 user 0m21.326s 00:13:42.738 sys 0m1.963s 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.738 ************************************ 00:13:42.738 END TEST raid_state_function_test 00:13:42.738 ************************************ 00:13:42.738 16:21:35 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:42.738 16:21:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.738 16:21:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.738 16:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.738 ************************************ 00:13:42.738 START TEST raid_state_function_test_sb 00:13:42.738 ************************************ 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.738 16:21:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74348 00:13:42.738 Process raid pid: 74348 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74348' 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74348 00:13:42.738 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74348 ']' 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.739 16:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.739 [2024-10-08 16:21:35.799685] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
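The `waitforlisten 74348` step above blocks until the freshly launched `bdev_svc` app is alive and its RPC socket at `/var/tmp/spdk.sock` is ready. The polling idea can be sketched as below; this is not SPDK's actual `waitforlisten` (which, among other things, typically also confirms the server answers an RPC) — the function name, timeout, and socket-file check are illustrative assumptions:

```shell
# Poll until the target process exists AND its UNIX-domain RPC socket has
# appeared, or give up after ~10 seconds. Sketch only: checking that the
# socket file exists is weaker than confirming the RPC server answers.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # socket file is in place
        sleep 0.1
    done
    return 1                                     # timed out waiting
}
```

On the failure path the dead-process check fires on the first iteration, so a crashed app is reported immediately rather than after the full timeout.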
00:13:42.739 [2024-10-08 16:21:35.800082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.739 [2024-10-08 16:21:35.968248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.997 [2024-10-08 16:21:36.207083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.255 [2024-10-08 16:21:36.410235] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.255 [2024-10-08 16:21:36.410285] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.512 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.512 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:43.512 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:43.512 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.512 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.512 [2024-10-08 16:21:36.798825] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.512 [2024-10-08 16:21:36.798927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.512 [2024-10-08 16:21:36.798943] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.512 [2024-10-08 16:21:36.798962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.512 [2024-10-08 16:21:36.798975] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:43.512 [2024-10-08 16:21:36.798990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.513 [2024-10-08 16:21:36.798999] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:43.513 [2024-10-08 16:21:36.799013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.513 16:21:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.513 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.771 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.771 "name": "Existed_Raid", 00:13:43.771 "uuid": "d6d3b47c-6ee5-456c-a163-5dba3a0182d6", 00:13:43.771 "strip_size_kb": 0, 00:13:43.771 "state": "configuring", 00:13:43.771 "raid_level": "raid1", 00:13:43.771 "superblock": true, 00:13:43.771 "num_base_bdevs": 4, 00:13:43.771 "num_base_bdevs_discovered": 0, 00:13:43.771 "num_base_bdevs_operational": 4, 00:13:43.771 "base_bdevs_list": [ 00:13:43.771 { 00:13:43.771 "name": "BaseBdev1", 00:13:43.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.771 "is_configured": false, 00:13:43.771 "data_offset": 0, 00:13:43.771 "data_size": 0 00:13:43.771 }, 00:13:43.771 { 00:13:43.771 "name": "BaseBdev2", 00:13:43.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.771 "is_configured": false, 00:13:43.771 "data_offset": 0, 00:13:43.771 "data_size": 0 00:13:43.771 }, 00:13:43.771 { 00:13:43.771 "name": "BaseBdev3", 00:13:43.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.771 "is_configured": false, 00:13:43.771 "data_offset": 0, 00:13:43.771 "data_size": 0 00:13:43.771 }, 00:13:43.771 { 00:13:43.771 "name": "BaseBdev4", 00:13:43.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.771 "is_configured": false, 00:13:43.771 "data_offset": 0, 00:13:43.771 "data_size": 0 00:13:43.771 } 00:13:43.771 ] 00:13:43.771 }' 00:13:43.771 16:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.771 16:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.030 16:21:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.030 [2024-10-08 16:21:37.330841] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.030 [2024-10-08 16:21:37.330906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.030 [2024-10-08 16:21:37.338852] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.030 [2024-10-08 16:21:37.338911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.030 [2024-10-08 16:21:37.338926] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.030 [2024-10-08 16:21:37.338943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.030 [2024-10-08 16:21:37.338953] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.030 [2024-10-08 16:21:37.338966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.030 [2024-10-08 16:21:37.338976] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:13:44.030 [2024-10-08 16:21:37.338990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.030 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 [2024-10-08 16:21:37.394715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.289 BaseBdev1 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 [ 00:13:44.289 { 00:13:44.289 "name": "BaseBdev1", 00:13:44.289 "aliases": [ 00:13:44.289 "7505ba81-d411-430e-ba48-6080dbd80e6b" 00:13:44.289 ], 00:13:44.289 "product_name": "Malloc disk", 00:13:44.289 "block_size": 512, 00:13:44.289 "num_blocks": 65536, 00:13:44.289 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:44.289 "assigned_rate_limits": { 00:13:44.289 "rw_ios_per_sec": 0, 00:13:44.289 "rw_mbytes_per_sec": 0, 00:13:44.289 "r_mbytes_per_sec": 0, 00:13:44.289 "w_mbytes_per_sec": 0 00:13:44.289 }, 00:13:44.289 "claimed": true, 00:13:44.289 "claim_type": "exclusive_write", 00:13:44.289 "zoned": false, 00:13:44.289 "supported_io_types": { 00:13:44.289 "read": true, 00:13:44.289 "write": true, 00:13:44.289 "unmap": true, 00:13:44.289 "flush": true, 00:13:44.289 "reset": true, 00:13:44.289 "nvme_admin": false, 00:13:44.289 "nvme_io": false, 00:13:44.289 "nvme_io_md": false, 00:13:44.289 "write_zeroes": true, 00:13:44.289 "zcopy": true, 00:13:44.289 "get_zone_info": false, 00:13:44.289 "zone_management": false, 00:13:44.289 "zone_append": false, 00:13:44.289 "compare": false, 00:13:44.289 "compare_and_write": false, 00:13:44.289 "abort": true, 00:13:44.289 "seek_hole": false, 00:13:44.289 "seek_data": false, 00:13:44.289 "copy": true, 00:13:44.289 "nvme_iov_md": false 00:13:44.289 }, 00:13:44.289 "memory_domains": [ 00:13:44.289 { 00:13:44.289 "dma_device_id": "system", 00:13:44.289 "dma_device_type": 1 00:13:44.289 }, 00:13:44.289 { 00:13:44.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.289 "dma_device_type": 2 00:13:44.289 } 00:13:44.289 
], 00:13:44.289 "driver_specific": {} 00:13:44.289 } 00:13:44.289 ] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 16:21:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.289 "name": "Existed_Raid", 00:13:44.289 "uuid": "bf066311-a799-4f16-a429-aac3b979817a", 00:13:44.289 "strip_size_kb": 0, 00:13:44.289 "state": "configuring", 00:13:44.289 "raid_level": "raid1", 00:13:44.289 "superblock": true, 00:13:44.289 "num_base_bdevs": 4, 00:13:44.289 "num_base_bdevs_discovered": 1, 00:13:44.289 "num_base_bdevs_operational": 4, 00:13:44.289 "base_bdevs_list": [ 00:13:44.289 { 00:13:44.289 "name": "BaseBdev1", 00:13:44.289 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:44.289 "is_configured": true, 00:13:44.289 "data_offset": 2048, 00:13:44.289 "data_size": 63488 00:13:44.289 }, 00:13:44.289 { 00:13:44.289 "name": "BaseBdev2", 00:13:44.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.289 "is_configured": false, 00:13:44.289 "data_offset": 0, 00:13:44.289 "data_size": 0 00:13:44.289 }, 00:13:44.289 { 00:13:44.289 "name": "BaseBdev3", 00:13:44.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.289 "is_configured": false, 00:13:44.289 "data_offset": 0, 00:13:44.289 "data_size": 0 00:13:44.289 }, 00:13:44.289 { 00:13:44.289 "name": "BaseBdev4", 00:13:44.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.289 "is_configured": false, 00:13:44.289 "data_offset": 0, 00:13:44.289 "data_size": 0 00:13:44.289 } 00:13:44.289 ] 00:13:44.289 }' 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.289 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.855 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.855 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.855 16:21:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 [2024-10-08 16:21:37.930908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.856 [2024-10-08 16:21:37.930989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 [2024-10-08 16:21:37.938964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.856 [2024-10-08 16:21:37.941482] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.856 [2024-10-08 16:21:37.941554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.856 [2024-10-08 16:21:37.941573] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.856 [2024-10-08 16:21:37.941591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.856 [2024-10-08 16:21:37.941601] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:44.856 [2024-10-08 16:21:37.941615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:44.856 "name": "Existed_Raid", 00:13:44.856 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:44.856 "strip_size_kb": 0, 00:13:44.856 "state": "configuring", 00:13:44.856 "raid_level": "raid1", 00:13:44.856 "superblock": true, 00:13:44.856 "num_base_bdevs": 4, 00:13:44.856 "num_base_bdevs_discovered": 1, 00:13:44.856 "num_base_bdevs_operational": 4, 00:13:44.856 "base_bdevs_list": [ 00:13:44.856 { 00:13:44.856 "name": "BaseBdev1", 00:13:44.856 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:44.856 "is_configured": true, 00:13:44.856 "data_offset": 2048, 00:13:44.856 "data_size": 63488 00:13:44.856 }, 00:13:44.856 { 00:13:44.856 "name": "BaseBdev2", 00:13:44.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.856 "is_configured": false, 00:13:44.856 "data_offset": 0, 00:13:44.856 "data_size": 0 00:13:44.856 }, 00:13:44.856 { 00:13:44.856 "name": "BaseBdev3", 00:13:44.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.856 "is_configured": false, 00:13:44.856 "data_offset": 0, 00:13:44.856 "data_size": 0 00:13:44.856 }, 00:13:44.856 { 00:13:44.856 "name": "BaseBdev4", 00:13:44.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.856 "is_configured": false, 00:13:44.856 "data_offset": 0, 00:13:44.856 "data_size": 0 00:13:44.856 } 00:13:44.856 ] 00:13:44.856 }' 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.856 16:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.422 [2024-10-08 16:21:38.509815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:13:45.422 BaseBdev2 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.422 [ 00:13:45.422 { 00:13:45.422 "name": "BaseBdev2", 00:13:45.422 "aliases": [ 00:13:45.422 "9c4dabc0-a710-403e-9250-c64ec8a75241" 00:13:45.422 ], 00:13:45.422 "product_name": "Malloc disk", 00:13:45.422 "block_size": 512, 00:13:45.422 "num_blocks": 65536, 00:13:45.422 "uuid": "9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:45.422 
"assigned_rate_limits": { 00:13:45.422 "rw_ios_per_sec": 0, 00:13:45.422 "rw_mbytes_per_sec": 0, 00:13:45.422 "r_mbytes_per_sec": 0, 00:13:45.422 "w_mbytes_per_sec": 0 00:13:45.422 }, 00:13:45.422 "claimed": true, 00:13:45.422 "claim_type": "exclusive_write", 00:13:45.422 "zoned": false, 00:13:45.422 "supported_io_types": { 00:13:45.422 "read": true, 00:13:45.422 "write": true, 00:13:45.422 "unmap": true, 00:13:45.422 "flush": true, 00:13:45.422 "reset": true, 00:13:45.422 "nvme_admin": false, 00:13:45.422 "nvme_io": false, 00:13:45.422 "nvme_io_md": false, 00:13:45.422 "write_zeroes": true, 00:13:45.422 "zcopy": true, 00:13:45.422 "get_zone_info": false, 00:13:45.422 "zone_management": false, 00:13:45.422 "zone_append": false, 00:13:45.422 "compare": false, 00:13:45.422 "compare_and_write": false, 00:13:45.422 "abort": true, 00:13:45.422 "seek_hole": false, 00:13:45.422 "seek_data": false, 00:13:45.422 "copy": true, 00:13:45.422 "nvme_iov_md": false 00:13:45.422 }, 00:13:45.422 "memory_domains": [ 00:13:45.422 { 00:13:45.422 "dma_device_id": "system", 00:13:45.422 "dma_device_type": 1 00:13:45.422 }, 00:13:45.422 { 00:13:45.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.422 "dma_device_type": 2 00:13:45.422 } 00:13:45.422 ], 00:13:45.422 "driver_specific": {} 00:13:45.422 } 00:13:45.422 ] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.422 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.422 "name": "Existed_Raid", 00:13:45.422 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:45.422 "strip_size_kb": 0, 00:13:45.422 "state": "configuring", 00:13:45.422 "raid_level": "raid1", 00:13:45.422 "superblock": true, 00:13:45.422 "num_base_bdevs": 4, 00:13:45.422 "num_base_bdevs_discovered": 2, 00:13:45.422 "num_base_bdevs_operational": 4, 
00:13:45.423 "base_bdevs_list": [ 00:13:45.423 { 00:13:45.423 "name": "BaseBdev1", 00:13:45.423 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:45.423 "is_configured": true, 00:13:45.423 "data_offset": 2048, 00:13:45.423 "data_size": 63488 00:13:45.423 }, 00:13:45.423 { 00:13:45.423 "name": "BaseBdev2", 00:13:45.423 "uuid": "9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:45.423 "is_configured": true, 00:13:45.423 "data_offset": 2048, 00:13:45.423 "data_size": 63488 00:13:45.423 }, 00:13:45.423 { 00:13:45.423 "name": "BaseBdev3", 00:13:45.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.423 "is_configured": false, 00:13:45.423 "data_offset": 0, 00:13:45.423 "data_size": 0 00:13:45.423 }, 00:13:45.423 { 00:13:45.423 "name": "BaseBdev4", 00:13:45.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.423 "is_configured": false, 00:13:45.423 "data_offset": 0, 00:13:45.423 "data_size": 0 00:13:45.423 } 00:13:45.423 ] 00:13:45.423 }' 00:13:45.423 16:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.423 16:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.988 [2024-10-08 16:21:39.112627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.988 BaseBdev3 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.988 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.988 [ 00:13:45.988 { 00:13:45.988 "name": "BaseBdev3", 00:13:45.988 "aliases": [ 00:13:45.988 "746d6e7e-5667-4060-9d75-de6340eabdec" 00:13:45.988 ], 00:13:45.988 "product_name": "Malloc disk", 00:13:45.988 "block_size": 512, 00:13:45.988 "num_blocks": 65536, 00:13:45.988 "uuid": "746d6e7e-5667-4060-9d75-de6340eabdec", 00:13:45.989 "assigned_rate_limits": { 00:13:45.989 "rw_ios_per_sec": 0, 00:13:45.989 "rw_mbytes_per_sec": 0, 00:13:45.989 "r_mbytes_per_sec": 0, 00:13:45.989 "w_mbytes_per_sec": 0 00:13:45.989 }, 00:13:45.989 "claimed": true, 00:13:45.989 "claim_type": "exclusive_write", 00:13:45.989 "zoned": false, 00:13:45.989 "supported_io_types": { 00:13:45.989 "read": true, 00:13:45.989 
"write": true, 00:13:45.989 "unmap": true, 00:13:45.989 "flush": true, 00:13:45.989 "reset": true, 00:13:45.989 "nvme_admin": false, 00:13:45.989 "nvme_io": false, 00:13:45.989 "nvme_io_md": false, 00:13:45.989 "write_zeroes": true, 00:13:45.989 "zcopy": true, 00:13:45.989 "get_zone_info": false, 00:13:45.989 "zone_management": false, 00:13:45.989 "zone_append": false, 00:13:45.989 "compare": false, 00:13:45.989 "compare_and_write": false, 00:13:45.989 "abort": true, 00:13:45.989 "seek_hole": false, 00:13:45.989 "seek_data": false, 00:13:45.989 "copy": true, 00:13:45.989 "nvme_iov_md": false 00:13:45.989 }, 00:13:45.989 "memory_domains": [ 00:13:45.989 { 00:13:45.989 "dma_device_id": "system", 00:13:45.989 "dma_device_type": 1 00:13:45.989 }, 00:13:45.989 { 00:13:45.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.989 "dma_device_type": 2 00:13:45.989 } 00:13:45.989 ], 00:13:45.989 "driver_specific": {} 00:13:45.989 } 00:13:45.989 ] 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.989 "name": "Existed_Raid", 00:13:45.989 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:45.989 "strip_size_kb": 0, 00:13:45.989 "state": "configuring", 00:13:45.989 "raid_level": "raid1", 00:13:45.989 "superblock": true, 00:13:45.989 "num_base_bdevs": 4, 00:13:45.989 "num_base_bdevs_discovered": 3, 00:13:45.989 "num_base_bdevs_operational": 4, 00:13:45.989 "base_bdevs_list": [ 00:13:45.989 { 00:13:45.989 "name": "BaseBdev1", 00:13:45.989 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:45.989 "is_configured": true, 00:13:45.989 "data_offset": 2048, 00:13:45.989 "data_size": 63488 00:13:45.989 }, 00:13:45.989 { 00:13:45.989 "name": "BaseBdev2", 00:13:45.989 "uuid": 
"9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:45.989 "is_configured": true, 00:13:45.989 "data_offset": 2048, 00:13:45.989 "data_size": 63488 00:13:45.989 }, 00:13:45.989 { 00:13:45.989 "name": "BaseBdev3", 00:13:45.989 "uuid": "746d6e7e-5667-4060-9d75-de6340eabdec", 00:13:45.989 "is_configured": true, 00:13:45.989 "data_offset": 2048, 00:13:45.989 "data_size": 63488 00:13:45.989 }, 00:13:45.989 { 00:13:45.989 "name": "BaseBdev4", 00:13:45.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.989 "is_configured": false, 00:13:45.989 "data_offset": 0, 00:13:45.989 "data_size": 0 00:13:45.989 } 00:13:45.989 ] 00:13:45.989 }' 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.989 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.567 [2024-10-08 16:21:39.703155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.567 [2024-10-08 16:21:39.703507] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:46.567 [2024-10-08 16:21:39.703560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.567 BaseBdev4 00:13:46.567 [2024-10-08 16:21:39.703910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:46.567 [2024-10-08 16:21:39.704107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:46.567 [2024-10-08 16:21:39.704389] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:46.567 [2024-10-08 16:21:39.704808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.567 [ 00:13:46.567 { 00:13:46.567 "name": "BaseBdev4", 00:13:46.567 "aliases": [ 00:13:46.567 "30992de1-89b8-4930-bba2-82e24a9fd93f" 00:13:46.567 ], 00:13:46.567 "product_name": "Malloc disk", 00:13:46.567 "block_size": 512, 00:13:46.567 
"num_blocks": 65536, 00:13:46.567 "uuid": "30992de1-89b8-4930-bba2-82e24a9fd93f", 00:13:46.567 "assigned_rate_limits": { 00:13:46.567 "rw_ios_per_sec": 0, 00:13:46.567 "rw_mbytes_per_sec": 0, 00:13:46.567 "r_mbytes_per_sec": 0, 00:13:46.567 "w_mbytes_per_sec": 0 00:13:46.567 }, 00:13:46.567 "claimed": true, 00:13:46.567 "claim_type": "exclusive_write", 00:13:46.567 "zoned": false, 00:13:46.567 "supported_io_types": { 00:13:46.567 "read": true, 00:13:46.567 "write": true, 00:13:46.567 "unmap": true, 00:13:46.567 "flush": true, 00:13:46.567 "reset": true, 00:13:46.567 "nvme_admin": false, 00:13:46.567 "nvme_io": false, 00:13:46.567 "nvme_io_md": false, 00:13:46.567 "write_zeroes": true, 00:13:46.567 "zcopy": true, 00:13:46.567 "get_zone_info": false, 00:13:46.567 "zone_management": false, 00:13:46.567 "zone_append": false, 00:13:46.567 "compare": false, 00:13:46.567 "compare_and_write": false, 00:13:46.567 "abort": true, 00:13:46.567 "seek_hole": false, 00:13:46.567 "seek_data": false, 00:13:46.567 "copy": true, 00:13:46.567 "nvme_iov_md": false 00:13:46.567 }, 00:13:46.567 "memory_domains": [ 00:13:46.567 { 00:13:46.567 "dma_device_id": "system", 00:13:46.567 "dma_device_type": 1 00:13:46.567 }, 00:13:46.567 { 00:13:46.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.567 "dma_device_type": 2 00:13:46.567 } 00:13:46.567 ], 00:13:46.567 "driver_specific": {} 00:13:46.567 } 00:13:46.567 ] 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:46.567 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.568 "name": "Existed_Raid", 00:13:46.568 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:46.568 "strip_size_kb": 0, 00:13:46.568 "state": "online", 00:13:46.568 "raid_level": "raid1", 00:13:46.568 "superblock": true, 00:13:46.568 "num_base_bdevs": 4, 
00:13:46.568 "num_base_bdevs_discovered": 4, 00:13:46.568 "num_base_bdevs_operational": 4, 00:13:46.568 "base_bdevs_list": [ 00:13:46.568 { 00:13:46.568 "name": "BaseBdev1", 00:13:46.568 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:46.568 "is_configured": true, 00:13:46.568 "data_offset": 2048, 00:13:46.568 "data_size": 63488 00:13:46.568 }, 00:13:46.568 { 00:13:46.568 "name": "BaseBdev2", 00:13:46.568 "uuid": "9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:46.568 "is_configured": true, 00:13:46.568 "data_offset": 2048, 00:13:46.568 "data_size": 63488 00:13:46.568 }, 00:13:46.568 { 00:13:46.568 "name": "BaseBdev3", 00:13:46.568 "uuid": "746d6e7e-5667-4060-9d75-de6340eabdec", 00:13:46.568 "is_configured": true, 00:13:46.568 "data_offset": 2048, 00:13:46.568 "data_size": 63488 00:13:46.568 }, 00:13:46.568 { 00:13:46.568 "name": "BaseBdev4", 00:13:46.568 "uuid": "30992de1-89b8-4930-bba2-82e24a9fd93f", 00:13:46.568 "is_configured": true, 00:13:46.568 "data_offset": 2048, 00:13:46.568 "data_size": 63488 00:13:46.568 } 00:13:46.568 ] 00:13:46.568 }' 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.568 16:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:47.134 
16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.134 [2024-10-08 16:21:40.223785] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:47.134 "name": "Existed_Raid", 00:13:47.134 "aliases": [ 00:13:47.134 "9c12b73b-1a5c-41ac-8542-8e7b50f96630" 00:13:47.134 ], 00:13:47.134 "product_name": "Raid Volume", 00:13:47.134 "block_size": 512, 00:13:47.134 "num_blocks": 63488, 00:13:47.134 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:47.134 "assigned_rate_limits": { 00:13:47.134 "rw_ios_per_sec": 0, 00:13:47.134 "rw_mbytes_per_sec": 0, 00:13:47.134 "r_mbytes_per_sec": 0, 00:13:47.134 "w_mbytes_per_sec": 0 00:13:47.134 }, 00:13:47.134 "claimed": false, 00:13:47.134 "zoned": false, 00:13:47.134 "supported_io_types": { 00:13:47.134 "read": true, 00:13:47.134 "write": true, 00:13:47.134 "unmap": false, 00:13:47.134 "flush": false, 00:13:47.134 "reset": true, 00:13:47.134 "nvme_admin": false, 00:13:47.134 "nvme_io": false, 00:13:47.134 "nvme_io_md": false, 00:13:47.134 "write_zeroes": true, 00:13:47.134 "zcopy": false, 00:13:47.134 "get_zone_info": false, 00:13:47.134 "zone_management": false, 00:13:47.134 "zone_append": false, 00:13:47.134 "compare": false, 00:13:47.134 "compare_and_write": false, 00:13:47.134 "abort": false, 00:13:47.134 "seek_hole": false, 00:13:47.134 "seek_data": false, 00:13:47.134 "copy": false, 00:13:47.134 
"nvme_iov_md": false 00:13:47.134 }, 00:13:47.134 "memory_domains": [ 00:13:47.134 { 00:13:47.134 "dma_device_id": "system", 00:13:47.134 "dma_device_type": 1 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.134 "dma_device_type": 2 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "system", 00:13:47.134 "dma_device_type": 1 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.134 "dma_device_type": 2 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "system", 00:13:47.134 "dma_device_type": 1 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.134 "dma_device_type": 2 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "system", 00:13:47.134 "dma_device_type": 1 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.134 "dma_device_type": 2 00:13:47.134 } 00:13:47.134 ], 00:13:47.134 "driver_specific": { 00:13:47.134 "raid": { 00:13:47.134 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:47.134 "strip_size_kb": 0, 00:13:47.134 "state": "online", 00:13:47.134 "raid_level": "raid1", 00:13:47.134 "superblock": true, 00:13:47.134 "num_base_bdevs": 4, 00:13:47.134 "num_base_bdevs_discovered": 4, 00:13:47.134 "num_base_bdevs_operational": 4, 00:13:47.134 "base_bdevs_list": [ 00:13:47.134 { 00:13:47.134 "name": "BaseBdev1", 00:13:47.134 "uuid": "7505ba81-d411-430e-ba48-6080dbd80e6b", 00:13:47.134 "is_configured": true, 00:13:47.134 "data_offset": 2048, 00:13:47.134 "data_size": 63488 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "name": "BaseBdev2", 00:13:47.134 "uuid": "9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:47.134 "is_configured": true, 00:13:47.134 "data_offset": 2048, 00:13:47.134 "data_size": 63488 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "name": "BaseBdev3", 00:13:47.134 "uuid": "746d6e7e-5667-4060-9d75-de6340eabdec", 00:13:47.134 "is_configured": true, 
00:13:47.134 "data_offset": 2048, 00:13:47.134 "data_size": 63488 00:13:47.134 }, 00:13:47.134 { 00:13:47.134 "name": "BaseBdev4", 00:13:47.134 "uuid": "30992de1-89b8-4930-bba2-82e24a9fd93f", 00:13:47.134 "is_configured": true, 00:13:47.134 "data_offset": 2048, 00:13:47.134 "data_size": 63488 00:13:47.134 } 00:13:47.134 ] 00:13:47.134 } 00:13:47.134 } 00:13:47.134 }' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:47.134 BaseBdev2 00:13:47.134 BaseBdev3 00:13:47.134 BaseBdev4' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.134 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.135 16:21:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.135 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:47.135 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.135 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.135 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.135 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.392 [2024-10-08 16:21:40.587505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:47.392 16:21:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.392 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.393 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.651 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.651 "name": "Existed_Raid", 00:13:47.651 "uuid": "9c12b73b-1a5c-41ac-8542-8e7b50f96630", 00:13:47.651 "strip_size_kb": 0, 00:13:47.651 
"state": "online", 00:13:47.651 "raid_level": "raid1", 00:13:47.651 "superblock": true, 00:13:47.651 "num_base_bdevs": 4, 00:13:47.651 "num_base_bdevs_discovered": 3, 00:13:47.651 "num_base_bdevs_operational": 3, 00:13:47.651 "base_bdevs_list": [ 00:13:47.651 { 00:13:47.651 "name": null, 00:13:47.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.651 "is_configured": false, 00:13:47.651 "data_offset": 0, 00:13:47.651 "data_size": 63488 00:13:47.651 }, 00:13:47.651 { 00:13:47.651 "name": "BaseBdev2", 00:13:47.651 "uuid": "9c4dabc0-a710-403e-9250-c64ec8a75241", 00:13:47.651 "is_configured": true, 00:13:47.651 "data_offset": 2048, 00:13:47.651 "data_size": 63488 00:13:47.651 }, 00:13:47.651 { 00:13:47.651 "name": "BaseBdev3", 00:13:47.651 "uuid": "746d6e7e-5667-4060-9d75-de6340eabdec", 00:13:47.651 "is_configured": true, 00:13:47.651 "data_offset": 2048, 00:13:47.651 "data_size": 63488 00:13:47.651 }, 00:13:47.651 { 00:13:47.651 "name": "BaseBdev4", 00:13:47.651 "uuid": "30992de1-89b8-4930-bba2-82e24a9fd93f", 00:13:47.651 "is_configured": true, 00:13:47.651 "data_offset": 2048, 00:13:47.651 "data_size": 63488 00:13:47.651 } 00:13:47.651 ] 00:13:47.651 }' 00:13:47.651 16:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.651 16:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.909 16:21:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.909 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.167 [2024-10-08 16:21:41.241979] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.167 [2024-10-08 16:21:41.389262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.167 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 [2024-10-08 16:21:41.536958] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:48.475 [2024-10-08 16:21:41.537354] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.475 [2024-10-08 16:21:41.625594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.475 [2024-10-08 16:21:41.625876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.475 [2024-10-08 16:21:41.626015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 BaseBdev2 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.475 16:21:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:48.475 [ 00:13:48.475 { 00:13:48.475 "name": "BaseBdev2", 00:13:48.475 "aliases": [ 00:13:48.475 "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6" 00:13:48.475 ], 00:13:48.475 "product_name": "Malloc disk", 00:13:48.475 "block_size": 512, 00:13:48.475 "num_blocks": 65536, 00:13:48.475 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:48.475 "assigned_rate_limits": { 00:13:48.475 "rw_ios_per_sec": 0, 00:13:48.475 "rw_mbytes_per_sec": 0, 00:13:48.475 "r_mbytes_per_sec": 0, 00:13:48.475 "w_mbytes_per_sec": 0 00:13:48.475 }, 00:13:48.475 "claimed": false, 00:13:48.475 "zoned": false, 00:13:48.475 "supported_io_types": { 00:13:48.475 "read": true, 00:13:48.475 "write": true, 00:13:48.475 "unmap": true, 00:13:48.475 "flush": true, 00:13:48.475 "reset": true, 00:13:48.475 "nvme_admin": false, 00:13:48.475 "nvme_io": false, 00:13:48.475 "nvme_io_md": false, 00:13:48.475 "write_zeroes": true, 00:13:48.475 "zcopy": true, 00:13:48.475 "get_zone_info": false, 00:13:48.475 "zone_management": false, 00:13:48.475 "zone_append": false, 00:13:48.475 "compare": false, 00:13:48.475 "compare_and_write": false, 00:13:48.475 "abort": true, 00:13:48.475 "seek_hole": false, 00:13:48.476 "seek_data": false, 00:13:48.476 "copy": true, 00:13:48.476 "nvme_iov_md": false 00:13:48.476 }, 00:13:48.476 "memory_domains": [ 00:13:48.476 { 00:13:48.476 "dma_device_id": "system", 00:13:48.476 "dma_device_type": 1 00:13:48.476 }, 00:13:48.476 { 00:13:48.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.476 "dma_device_type": 2 00:13:48.476 } 00:13:48.476 ], 00:13:48.476 "driver_specific": {} 00:13:48.476 } 00:13:48.476 ] 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:48.476 16:21:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.476 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 BaseBdev3 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.734 16:21:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 [ 00:13:48.734 { 00:13:48.734 "name": "BaseBdev3", 00:13:48.734 "aliases": [ 00:13:48.734 "fa7065e1-bcb0-4eee-9519-ec10d58819ed" 00:13:48.734 ], 00:13:48.734 "product_name": "Malloc disk", 00:13:48.734 "block_size": 512, 00:13:48.734 "num_blocks": 65536, 00:13:48.734 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:48.734 "assigned_rate_limits": { 00:13:48.734 "rw_ios_per_sec": 0, 00:13:48.734 "rw_mbytes_per_sec": 0, 00:13:48.734 "r_mbytes_per_sec": 0, 00:13:48.734 "w_mbytes_per_sec": 0 00:13:48.734 }, 00:13:48.734 "claimed": false, 00:13:48.734 "zoned": false, 00:13:48.734 "supported_io_types": { 00:13:48.734 "read": true, 00:13:48.734 "write": true, 00:13:48.734 "unmap": true, 00:13:48.734 "flush": true, 00:13:48.734 "reset": true, 00:13:48.734 "nvme_admin": false, 00:13:48.734 "nvme_io": false, 00:13:48.734 "nvme_io_md": false, 00:13:48.734 "write_zeroes": true, 00:13:48.734 "zcopy": true, 00:13:48.734 "get_zone_info": false, 00:13:48.734 "zone_management": false, 00:13:48.734 "zone_append": false, 00:13:48.734 "compare": false, 00:13:48.734 "compare_and_write": false, 00:13:48.734 "abort": true, 00:13:48.734 "seek_hole": false, 00:13:48.734 "seek_data": false, 00:13:48.734 "copy": true, 00:13:48.734 "nvme_iov_md": false 00:13:48.734 }, 00:13:48.734 "memory_domains": [ 00:13:48.734 { 00:13:48.734 "dma_device_id": "system", 00:13:48.734 "dma_device_type": 1 00:13:48.734 }, 00:13:48.734 { 00:13:48.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.734 "dma_device_type": 2 00:13:48.734 } 00:13:48.734 ], 00:13:48.734 "driver_specific": {} 00:13:48.734 } 00:13:48.734 ] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 BaseBdev4 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.734 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 [ 00:13:48.734 { 00:13:48.734 "name": "BaseBdev4", 00:13:48.734 "aliases": [ 00:13:48.734 "ead6d094-3b40-47df-b91f-0d9d9ba54578" 00:13:48.734 ], 00:13:48.734 "product_name": "Malloc disk", 00:13:48.734 "block_size": 512, 00:13:48.734 "num_blocks": 65536, 00:13:48.734 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:48.734 "assigned_rate_limits": { 00:13:48.734 "rw_ios_per_sec": 0, 00:13:48.734 "rw_mbytes_per_sec": 0, 00:13:48.734 "r_mbytes_per_sec": 0, 00:13:48.734 "w_mbytes_per_sec": 0 00:13:48.734 }, 00:13:48.734 "claimed": false, 00:13:48.734 "zoned": false, 00:13:48.734 "supported_io_types": { 00:13:48.734 "read": true, 00:13:48.734 "write": true, 00:13:48.734 "unmap": true, 00:13:48.734 "flush": true, 00:13:48.734 "reset": true, 00:13:48.734 "nvme_admin": false, 00:13:48.734 "nvme_io": false, 00:13:48.734 "nvme_io_md": false, 00:13:48.734 "write_zeroes": true, 00:13:48.734 "zcopy": true, 00:13:48.735 "get_zone_info": false, 00:13:48.735 "zone_management": false, 00:13:48.735 "zone_append": false, 00:13:48.735 "compare": false, 00:13:48.735 "compare_and_write": false, 00:13:48.735 "abort": true, 00:13:48.735 "seek_hole": false, 00:13:48.735 "seek_data": false, 00:13:48.735 "copy": true, 00:13:48.735 "nvme_iov_md": false 00:13:48.735 }, 00:13:48.735 "memory_domains": [ 00:13:48.735 { 00:13:48.735 "dma_device_id": "system", 00:13:48.735 "dma_device_type": 1 00:13:48.735 }, 00:13:48.735 { 00:13:48.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.735 "dma_device_type": 2 00:13:48.735 } 00:13:48.735 ], 00:13:48.735 "driver_specific": {} 00:13:48.735 } 00:13:48.735 ] 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
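The loop traced above recreates each deleted base bdev as a 32 MiB malloc disk with 512-byte blocks and waits for it to register. A control-flow sketch of that loop with `rpc_cmd` stubbed out, so it can be followed without a running SPDK target (the stub and the name arithmetic are assumptions inferred from this trace, not the script's exact source):

```shell
# Sketch of the recreate loop at bdev_raid.sh@286-288 as replayed from this log.
# rpc_cmd is stubbed; the real helper shells out to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }
num_base_bdevs=4
for ((i = 1; i < num_base_bdevs; i++)); do
  name="BaseBdev$((i + 1))"                     # BaseBdev2..BaseBdev4 per the log
  rpc_cmd bdev_malloc_create 32 512 -b "$name"  # 32 MiB, 512 B block size
  rpc_cmd bdev_wait_for_examine                 # waitforbdev polls until ready
done
```

BaseBdev1 is deliberately left missing at this point, which is what keeps the array in the "configuring" state when it is created next.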
00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.735 [2024-10-08 16:21:41.920113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.735 [2024-10-08 16:21:41.920419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.735 [2024-10-08 16:21:41.920665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.735 [2024-10-08 16:21:41.923164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.735 [2024-10-08 16:21:41.923235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.735 "name": "Existed_Raid", 00:13:48.735 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:48.735 "strip_size_kb": 0, 00:13:48.735 "state": "configuring", 00:13:48.735 "raid_level": "raid1", 00:13:48.735 "superblock": true, 00:13:48.735 "num_base_bdevs": 4, 00:13:48.735 "num_base_bdevs_discovered": 3, 00:13:48.735 "num_base_bdevs_operational": 4, 00:13:48.735 "base_bdevs_list": [ 00:13:48.735 { 00:13:48.735 "name": "BaseBdev1", 00:13:48.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.735 "is_configured": false, 00:13:48.735 "data_offset": 0, 00:13:48.735 "data_size": 0 00:13:48.735 }, 00:13:48.735 { 00:13:48.735 "name": "BaseBdev2", 00:13:48.735 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 
00:13:48.735 "is_configured": true, 00:13:48.735 "data_offset": 2048, 00:13:48.735 "data_size": 63488 00:13:48.735 }, 00:13:48.735 { 00:13:48.735 "name": "BaseBdev3", 00:13:48.735 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:48.735 "is_configured": true, 00:13:48.735 "data_offset": 2048, 00:13:48.735 "data_size": 63488 00:13:48.735 }, 00:13:48.735 { 00:13:48.735 "name": "BaseBdev4", 00:13:48.735 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:48.735 "is_configured": true, 00:13:48.735 "data_offset": 2048, 00:13:48.735 "data_size": 63488 00:13:48.735 } 00:13:48.735 ] 00:13:48.735 }' 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.735 16:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.301 [2024-10-08 16:21:42.444255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.301 "name": "Existed_Raid", 00:13:49.301 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:49.301 "strip_size_kb": 0, 00:13:49.301 "state": "configuring", 00:13:49.301 "raid_level": "raid1", 00:13:49.301 "superblock": true, 00:13:49.301 "num_base_bdevs": 4, 00:13:49.301 "num_base_bdevs_discovered": 2, 00:13:49.301 "num_base_bdevs_operational": 4, 00:13:49.301 "base_bdevs_list": [ 00:13:49.301 { 00:13:49.301 "name": "BaseBdev1", 00:13:49.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.301 "is_configured": false, 00:13:49.301 "data_offset": 0, 00:13:49.301 "data_size": 0 00:13:49.301 }, 00:13:49.301 { 00:13:49.301 "name": null, 00:13:49.301 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:49.301 
"is_configured": false, 00:13:49.301 "data_offset": 0, 00:13:49.301 "data_size": 63488 00:13:49.301 }, 00:13:49.301 { 00:13:49.301 "name": "BaseBdev3", 00:13:49.301 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:49.301 "is_configured": true, 00:13:49.301 "data_offset": 2048, 00:13:49.301 "data_size": 63488 00:13:49.301 }, 00:13:49.301 { 00:13:49.301 "name": "BaseBdev4", 00:13:49.301 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:49.301 "is_configured": true, 00:13:49.301 "data_offset": 2048, 00:13:49.301 "data_size": 63488 00:13:49.301 } 00:13:49.301 ] 00:13:49.301 }' 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.301 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.866 16:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:49.866 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.866 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 16:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 [2024-10-08 16:21:43.042018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.866 BaseBdev1 
00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 [ 00:13:49.866 { 00:13:49.866 "name": "BaseBdev1", 00:13:49.866 "aliases": [ 00:13:49.866 "2c75fd98-d348-4fe7-bf83-09b9aeb29981" 00:13:49.866 ], 00:13:49.866 "product_name": "Malloc disk", 00:13:49.866 "block_size": 512, 00:13:49.866 "num_blocks": 65536, 00:13:49.866 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:49.866 "assigned_rate_limits": { 00:13:49.866 
"rw_ios_per_sec": 0, 00:13:49.866 "rw_mbytes_per_sec": 0, 00:13:49.866 "r_mbytes_per_sec": 0, 00:13:49.866 "w_mbytes_per_sec": 0 00:13:49.866 }, 00:13:49.866 "claimed": true, 00:13:49.866 "claim_type": "exclusive_write", 00:13:49.866 "zoned": false, 00:13:49.866 "supported_io_types": { 00:13:49.866 "read": true, 00:13:49.866 "write": true, 00:13:49.866 "unmap": true, 00:13:49.866 "flush": true, 00:13:49.866 "reset": true, 00:13:49.866 "nvme_admin": false, 00:13:49.866 "nvme_io": false, 00:13:49.866 "nvme_io_md": false, 00:13:49.866 "write_zeroes": true, 00:13:49.866 "zcopy": true, 00:13:49.866 "get_zone_info": false, 00:13:49.866 "zone_management": false, 00:13:49.866 "zone_append": false, 00:13:49.866 "compare": false, 00:13:49.866 "compare_and_write": false, 00:13:49.866 "abort": true, 00:13:49.866 "seek_hole": false, 00:13:49.866 "seek_data": false, 00:13:49.866 "copy": true, 00:13:49.866 "nvme_iov_md": false 00:13:49.866 }, 00:13:49.866 "memory_domains": [ 00:13:49.866 { 00:13:49.866 "dma_device_id": "system", 00:13:49.866 "dma_device_type": 1 00:13:49.866 }, 00:13:49.866 { 00:13:49.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.866 "dma_device_type": 2 00:13:49.866 } 00:13:49.866 ], 00:13:49.866 "driver_specific": {} 00:13:49.866 } 00:13:49.866 ] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.866 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.867 "name": "Existed_Raid", 00:13:49.867 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:49.867 "strip_size_kb": 0, 00:13:49.867 "state": "configuring", 00:13:49.867 "raid_level": "raid1", 00:13:49.867 "superblock": true, 00:13:49.867 "num_base_bdevs": 4, 00:13:49.867 "num_base_bdevs_discovered": 3, 00:13:49.867 "num_base_bdevs_operational": 4, 00:13:49.867 "base_bdevs_list": [ 00:13:49.867 { 00:13:49.867 "name": "BaseBdev1", 00:13:49.867 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:49.867 "is_configured": true, 00:13:49.867 "data_offset": 2048, 00:13:49.867 "data_size": 63488 
00:13:49.867 }, 00:13:49.867 { 00:13:49.867 "name": null, 00:13:49.867 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:49.867 "is_configured": false, 00:13:49.867 "data_offset": 0, 00:13:49.867 "data_size": 63488 00:13:49.867 }, 00:13:49.867 { 00:13:49.867 "name": "BaseBdev3", 00:13:49.867 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:49.867 "is_configured": true, 00:13:49.867 "data_offset": 2048, 00:13:49.867 "data_size": 63488 00:13:49.867 }, 00:13:49.867 { 00:13:49.867 "name": "BaseBdev4", 00:13:49.867 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:49.867 "is_configured": true, 00:13:49.867 "data_offset": 2048, 00:13:49.867 "data_size": 63488 00:13:49.867 } 00:13:49.867 ] 00:13:49.867 }' 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.867 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.433 
[2024-10-08 16:21:43.670342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.433 16:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.433 "name": "Existed_Raid", 00:13:50.433 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:50.433 "strip_size_kb": 0, 00:13:50.433 "state": "configuring", 00:13:50.433 "raid_level": "raid1", 00:13:50.433 "superblock": true, 00:13:50.433 "num_base_bdevs": 4, 00:13:50.433 "num_base_bdevs_discovered": 2, 00:13:50.433 "num_base_bdevs_operational": 4, 00:13:50.433 "base_bdevs_list": [ 00:13:50.433 { 00:13:50.433 "name": "BaseBdev1", 00:13:50.433 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:50.433 "is_configured": true, 00:13:50.433 "data_offset": 2048, 00:13:50.433 "data_size": 63488 00:13:50.433 }, 00:13:50.433 { 00:13:50.433 "name": null, 00:13:50.433 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:50.433 "is_configured": false, 00:13:50.433 "data_offset": 0, 00:13:50.433 "data_size": 63488 00:13:50.433 }, 00:13:50.433 { 00:13:50.433 "name": null, 00:13:50.433 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:50.433 "is_configured": false, 00:13:50.433 "data_offset": 0, 00:13:50.433 "data_size": 63488 00:13:50.433 }, 00:13:50.433 { 00:13:50.433 "name": "BaseBdev4", 00:13:50.433 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:50.433 "is_configured": true, 00:13:50.433 "data_offset": 2048, 00:13:50.433 "data_size": 63488 00:13:50.433 } 00:13:50.433 ] 00:13:50.433 }' 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.433 16:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.000 
16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.000 [2024-10-08 16:21:44.234494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.000 "name": "Existed_Raid", 00:13:51.000 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:51.000 "strip_size_kb": 0, 00:13:51.000 "state": "configuring", 00:13:51.000 "raid_level": "raid1", 00:13:51.000 "superblock": true, 00:13:51.000 "num_base_bdevs": 4, 00:13:51.000 "num_base_bdevs_discovered": 3, 00:13:51.000 "num_base_bdevs_operational": 4, 00:13:51.000 "base_bdevs_list": [ 00:13:51.000 { 00:13:51.000 "name": "BaseBdev1", 00:13:51.000 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:51.000 "is_configured": true, 00:13:51.000 "data_offset": 2048, 00:13:51.000 "data_size": 63488 00:13:51.000 }, 00:13:51.000 { 00:13:51.000 "name": null, 00:13:51.000 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:51.000 "is_configured": false, 00:13:51.000 "data_offset": 0, 00:13:51.000 "data_size": 63488 00:13:51.000 }, 00:13:51.000 { 00:13:51.000 "name": "BaseBdev3", 00:13:51.000 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:51.000 "is_configured": true, 00:13:51.000 "data_offset": 2048, 00:13:51.000 "data_size": 63488 00:13:51.000 }, 00:13:51.000 { 00:13:51.000 "name": "BaseBdev4", 00:13:51.000 "uuid": 
"ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:51.000 "is_configured": true, 00:13:51.000 "data_offset": 2048, 00:13:51.000 "data_size": 63488 00:13:51.000 } 00:13:51.000 ] 00:13:51.000 }' 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.000 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.566 [2024-10-08 16:21:44.786704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.566 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.824 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.824 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.824 "name": "Existed_Raid", 00:13:51.824 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:51.824 "strip_size_kb": 0, 00:13:51.824 "state": "configuring", 00:13:51.824 "raid_level": "raid1", 00:13:51.824 "superblock": true, 00:13:51.824 "num_base_bdevs": 4, 00:13:51.824 "num_base_bdevs_discovered": 2, 00:13:51.824 "num_base_bdevs_operational": 4, 00:13:51.824 "base_bdevs_list": [ 00:13:51.824 { 00:13:51.824 "name": null, 00:13:51.824 
"uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:51.824 "is_configured": false, 00:13:51.824 "data_offset": 0, 00:13:51.824 "data_size": 63488 00:13:51.824 }, 00:13:51.824 { 00:13:51.824 "name": null, 00:13:51.824 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:51.824 "is_configured": false, 00:13:51.824 "data_offset": 0, 00:13:51.824 "data_size": 63488 00:13:51.824 }, 00:13:51.824 { 00:13:51.824 "name": "BaseBdev3", 00:13:51.824 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:51.824 "is_configured": true, 00:13:51.824 "data_offset": 2048, 00:13:51.824 "data_size": 63488 00:13:51.824 }, 00:13:51.824 { 00:13:51.824 "name": "BaseBdev4", 00:13:51.824 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:51.824 "is_configured": true, 00:13:51.824 "data_offset": 2048, 00:13:51.824 "data_size": 63488 00:13:51.824 } 00:13:51.824 ] 00:13:51.824 }' 00:13:51.824 16:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.824 16:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.389 [2024-10-08 16:21:45.463025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.389 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.389 "name": "Existed_Raid", 00:13:52.389 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:52.389 "strip_size_kb": 0, 00:13:52.389 "state": "configuring", 00:13:52.389 "raid_level": "raid1", 00:13:52.389 "superblock": true, 00:13:52.389 "num_base_bdevs": 4, 00:13:52.389 "num_base_bdevs_discovered": 3, 00:13:52.389 "num_base_bdevs_operational": 4, 00:13:52.389 "base_bdevs_list": [ 00:13:52.389 { 00:13:52.389 "name": null, 00:13:52.389 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:52.389 "is_configured": false, 00:13:52.389 "data_offset": 0, 00:13:52.389 "data_size": 63488 00:13:52.389 }, 00:13:52.389 { 00:13:52.389 "name": "BaseBdev2", 00:13:52.389 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:52.389 "is_configured": true, 00:13:52.389 "data_offset": 2048, 00:13:52.389 "data_size": 63488 00:13:52.389 }, 00:13:52.389 { 00:13:52.389 "name": "BaseBdev3", 00:13:52.389 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:52.389 "is_configured": true, 00:13:52.390 "data_offset": 2048, 00:13:52.390 "data_size": 63488 00:13:52.390 }, 00:13:52.390 { 00:13:52.390 "name": "BaseBdev4", 00:13:52.390 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:52.390 "is_configured": true, 00:13:52.390 "data_offset": 2048, 00:13:52.390 "data_size": 63488 00:13:52.390 } 00:13:52.390 ] 00:13:52.390 }' 00:13:52.390 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.390 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.648 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.648 16:21:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:52.648 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.648 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.906 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c75fd98-d348-4fe7-bf83-09b9aeb29981 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.906 [2024-10-08 16:21:46.105014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:52.906 [2024-10-08 16:21:46.105319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:52.906 [2024-10-08 16:21:46.105352] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.906 [2024-10-08 16:21:46.105705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:52.906 
NewBaseBdev 00:13:52.906 [2024-10-08 16:21:46.105905] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:52.906 [2024-10-08 16:21:46.105922] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:52.906 [2024-10-08 16:21:46.106088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.906 [ 00:13:52.906 { 00:13:52.906 "name": "NewBaseBdev", 00:13:52.906 "aliases": [ 00:13:52.906 "2c75fd98-d348-4fe7-bf83-09b9aeb29981" 00:13:52.906 ], 00:13:52.906 "product_name": "Malloc disk", 00:13:52.906 "block_size": 512, 00:13:52.906 "num_blocks": 65536, 00:13:52.906 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:52.906 "assigned_rate_limits": { 00:13:52.906 "rw_ios_per_sec": 0, 00:13:52.906 "rw_mbytes_per_sec": 0, 00:13:52.906 "r_mbytes_per_sec": 0, 00:13:52.906 "w_mbytes_per_sec": 0 00:13:52.906 }, 00:13:52.906 "claimed": true, 00:13:52.906 "claim_type": "exclusive_write", 00:13:52.906 "zoned": false, 00:13:52.906 "supported_io_types": { 00:13:52.906 "read": true, 00:13:52.906 "write": true, 00:13:52.906 "unmap": true, 00:13:52.906 "flush": true, 00:13:52.906 "reset": true, 00:13:52.906 "nvme_admin": false, 00:13:52.906 "nvme_io": false, 00:13:52.906 "nvme_io_md": false, 00:13:52.906 "write_zeroes": true, 00:13:52.906 "zcopy": true, 00:13:52.906 "get_zone_info": false, 00:13:52.906 "zone_management": false, 00:13:52.906 "zone_append": false, 00:13:52.906 "compare": false, 00:13:52.906 "compare_and_write": false, 00:13:52.906 "abort": true, 00:13:52.906 "seek_hole": false, 00:13:52.906 "seek_data": false, 00:13:52.906 "copy": true, 00:13:52.906 "nvme_iov_md": false 00:13:52.906 }, 00:13:52.906 "memory_domains": [ 00:13:52.906 { 00:13:52.906 "dma_device_id": "system", 00:13:52.906 "dma_device_type": 1 00:13:52.906 }, 00:13:52.906 { 00:13:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.906 "dma_device_type": 2 00:13:52.906 } 00:13:52.906 ], 00:13:52.906 "driver_specific": {} 00:13:52.906 } 00:13:52.906 ] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:52.906 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.907 "name": "Existed_Raid", 00:13:52.907 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:52.907 "strip_size_kb": 0, 00:13:52.907 "state": "online", 00:13:52.907 "raid_level": 
"raid1", 00:13:52.907 "superblock": true, 00:13:52.907 "num_base_bdevs": 4, 00:13:52.907 "num_base_bdevs_discovered": 4, 00:13:52.907 "num_base_bdevs_operational": 4, 00:13:52.907 "base_bdevs_list": [ 00:13:52.907 { 00:13:52.907 "name": "NewBaseBdev", 00:13:52.907 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:52.907 "is_configured": true, 00:13:52.907 "data_offset": 2048, 00:13:52.907 "data_size": 63488 00:13:52.907 }, 00:13:52.907 { 00:13:52.907 "name": "BaseBdev2", 00:13:52.907 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:52.907 "is_configured": true, 00:13:52.907 "data_offset": 2048, 00:13:52.907 "data_size": 63488 00:13:52.907 }, 00:13:52.907 { 00:13:52.907 "name": "BaseBdev3", 00:13:52.907 "uuid": "fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:52.907 "is_configured": true, 00:13:52.907 "data_offset": 2048, 00:13:52.907 "data_size": 63488 00:13:52.907 }, 00:13:52.907 { 00:13:52.907 "name": "BaseBdev4", 00:13:52.907 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:52.907 "is_configured": true, 00:13:52.907 "data_offset": 2048, 00:13:52.907 "data_size": 63488 00:13:52.907 } 00:13:52.907 ] 00:13:52.907 }' 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.907 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.473 [2024-10-08 16:21:46.649684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.473 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.473 "name": "Existed_Raid", 00:13:53.473 "aliases": [ 00:13:53.473 "125f34de-2223-4375-8ac5-75f548554cb2" 00:13:53.473 ], 00:13:53.473 "product_name": "Raid Volume", 00:13:53.473 "block_size": 512, 00:13:53.473 "num_blocks": 63488, 00:13:53.473 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:53.473 "assigned_rate_limits": { 00:13:53.473 "rw_ios_per_sec": 0, 00:13:53.473 "rw_mbytes_per_sec": 0, 00:13:53.473 "r_mbytes_per_sec": 0, 00:13:53.473 "w_mbytes_per_sec": 0 00:13:53.473 }, 00:13:53.474 "claimed": false, 00:13:53.474 "zoned": false, 00:13:53.474 "supported_io_types": { 00:13:53.474 "read": true, 00:13:53.474 "write": true, 00:13:53.474 "unmap": false, 00:13:53.474 "flush": false, 00:13:53.474 "reset": true, 00:13:53.474 "nvme_admin": false, 00:13:53.474 "nvme_io": false, 00:13:53.474 "nvme_io_md": false, 00:13:53.474 "write_zeroes": true, 00:13:53.474 "zcopy": false, 00:13:53.474 "get_zone_info": false, 00:13:53.474 "zone_management": false, 00:13:53.474 "zone_append": false, 00:13:53.474 "compare": false, 00:13:53.474 "compare_and_write": false, 00:13:53.474 "abort": false, 00:13:53.474 "seek_hole": false, 
00:13:53.474 "seek_data": false, 00:13:53.474 "copy": false, 00:13:53.474 "nvme_iov_md": false 00:13:53.474 }, 00:13:53.474 "memory_domains": [ 00:13:53.474 { 00:13:53.474 "dma_device_id": "system", 00:13:53.474 "dma_device_type": 1 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.474 "dma_device_type": 2 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "system", 00:13:53.474 "dma_device_type": 1 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.474 "dma_device_type": 2 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "system", 00:13:53.474 "dma_device_type": 1 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.474 "dma_device_type": 2 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "system", 00:13:53.474 "dma_device_type": 1 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.474 "dma_device_type": 2 00:13:53.474 } 00:13:53.474 ], 00:13:53.474 "driver_specific": { 00:13:53.474 "raid": { 00:13:53.474 "uuid": "125f34de-2223-4375-8ac5-75f548554cb2", 00:13:53.474 "strip_size_kb": 0, 00:13:53.474 "state": "online", 00:13:53.474 "raid_level": "raid1", 00:13:53.474 "superblock": true, 00:13:53.474 "num_base_bdevs": 4, 00:13:53.474 "num_base_bdevs_discovered": 4, 00:13:53.474 "num_base_bdevs_operational": 4, 00:13:53.474 "base_bdevs_list": [ 00:13:53.474 { 00:13:53.474 "name": "NewBaseBdev", 00:13:53.474 "uuid": "2c75fd98-d348-4fe7-bf83-09b9aeb29981", 00:13:53.474 "is_configured": true, 00:13:53.474 "data_offset": 2048, 00:13:53.474 "data_size": 63488 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "name": "BaseBdev2", 00:13:53.474 "uuid": "1a7e8124-83e1-47fc-b48f-6a4885cbe0f6", 00:13:53.474 "is_configured": true, 00:13:53.474 "data_offset": 2048, 00:13:53.474 "data_size": 63488 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "name": "BaseBdev3", 00:13:53.474 "uuid": 
"fa7065e1-bcb0-4eee-9519-ec10d58819ed", 00:13:53.474 "is_configured": true, 00:13:53.474 "data_offset": 2048, 00:13:53.474 "data_size": 63488 00:13:53.474 }, 00:13:53.474 { 00:13:53.474 "name": "BaseBdev4", 00:13:53.474 "uuid": "ead6d094-3b40-47df-b91f-0d9d9ba54578", 00:13:53.474 "is_configured": true, 00:13:53.474 "data_offset": 2048, 00:13:53.474 "data_size": 63488 00:13:53.474 } 00:13:53.474 ] 00:13:53.474 } 00:13:53.474 } 00:13:53.474 }' 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:53.474 BaseBdev2 00:13:53.474 BaseBdev3 00:13:53.474 BaseBdev4' 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.474 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.732 
16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.732 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.733 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.733 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.733 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.733 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.733 [2024-10-08 16:21:47.005299] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.733 [2024-10-08 16:21:47.005355] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.733 [2024-10-08 16:21:47.005457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.733 [2024-10-08 16:21:47.005835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.733 [2024-10-08 16:21:47.005871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:53.733 16:21:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74348 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74348 ']' 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74348 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74348 00:13:53.733 killing process with pid 74348 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74348' 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74348 00:13:53.733 [2024-10-08 16:21:47.045397] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.733 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74348 00:13:54.298 [2024-10-08 16:21:47.397838] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.710 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:55.710 00:13:55.710 real 0m12.909s 00:13:55.710 user 0m21.255s 00:13:55.710 sys 0m1.822s 00:13:55.710 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.710 16:21:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.710 ************************************ 00:13:55.710 END TEST raid_state_function_test_sb 00:13:55.710 ************************************ 00:13:55.710 16:21:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:55.710 16:21:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:55.710 16:21:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.710 16:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.710 ************************************ 00:13:55.710 START TEST raid_superblock_test 00:13:55.710 ************************************ 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75024 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75024 00:13:55.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75024 ']' 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.710 16:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.710 [2024-10-08 16:21:48.779117] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:13:55.710 [2024-10-08 16:21:48.779313] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:13:55.710 [2024-10-08 16:21:48.953035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.968 [2024-10-08 16:21:49.191567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.226 [2024-10-08 16:21:49.393037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.226 [2024-10-08 16:21:49.393114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:56.483 
16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.483 malloc1 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.483 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.483 [2024-10-08 16:21:49.789710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:56.483 [2024-10-08 16:21:49.790021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.484 [2024-10-08 16:21:49.790105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:56.484 [2024-10-08 16:21:49.790323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.484 [2024-10-08 16:21:49.793124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.484 [2024-10-08 16:21:49.793294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:56.484 pt1 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.484 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.742 malloc2 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.742 [2024-10-08 16:21:49.858916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:56.742 [2024-10-08 16:21:49.859008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.742 [2024-10-08 16:21:49.859043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:56.742 [2024-10-08 16:21:49.859059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.742 [2024-10-08 16:21:49.861859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.742 [2024-10-08 16:21:49.861906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:56.742 
pt2 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.742 malloc3 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.742 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.742 [2024-10-08 16:21:49.914925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:56.742 [2024-10-08 16:21:49.915211] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.742 [2024-10-08 16:21:49.915265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.743 [2024-10-08 16:21:49.915282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.743 [2024-10-08 16:21:49.918053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.743 [2024-10-08 16:21:49.918099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:56.743 pt3 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.743 malloc4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.743 [2024-10-08 16:21:49.970821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:56.743 [2024-10-08 16:21:49.970900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.743 [2024-10-08 16:21:49.970930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:56.743 [2024-10-08 16:21:49.970946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.743 [2024-10-08 16:21:49.973689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.743 [2024-10-08 16:21:49.973733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:56.743 pt4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.743 [2024-10-08 16:21:49.982908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:56.743 [2024-10-08 16:21:49.985311] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.743 [2024-10-08 16:21:49.985403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.743 [2024-10-08 16:21:49.985470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:56.743 [2024-10-08 16:21:49.985761] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:56.743 [2024-10-08 16:21:49.985781] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.743 [2024-10-08 16:21:49.986150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:56.743 [2024-10-08 16:21:49.986399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:56.743 [2024-10-08 16:21:49.986425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:56.743 [2024-10-08 16:21:49.986644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.743 
16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.743 16:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.743 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.743 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.743 "name": "raid_bdev1", 00:13:56.743 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:56.743 "strip_size_kb": 0, 00:13:56.743 "state": "online", 00:13:56.743 "raid_level": "raid1", 00:13:56.743 "superblock": true, 00:13:56.743 "num_base_bdevs": 4, 00:13:56.743 "num_base_bdevs_discovered": 4, 00:13:56.743 "num_base_bdevs_operational": 4, 00:13:56.743 "base_bdevs_list": [ 00:13:56.743 { 00:13:56.743 "name": "pt1", 00:13:56.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.743 "is_configured": true, 00:13:56.743 "data_offset": 2048, 00:13:56.743 "data_size": 63488 00:13:56.743 }, 00:13:56.743 { 00:13:56.743 "name": "pt2", 00:13:56.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.743 "is_configured": true, 00:13:56.743 "data_offset": 2048, 00:13:56.743 "data_size": 63488 00:13:56.743 }, 00:13:56.743 { 00:13:56.743 "name": "pt3", 00:13:56.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.743 "is_configured": true, 00:13:56.743 "data_offset": 2048, 00:13:56.743 "data_size": 63488 
00:13:56.743 }, 00:13:56.743 { 00:13:56.743 "name": "pt4", 00:13:56.743 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.743 "is_configured": true, 00:13:56.743 "data_offset": 2048, 00:13:56.743 "data_size": 63488 00:13:56.743 } 00:13:56.743 ] 00:13:56.743 }' 00:13:56.743 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.743 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.308 [2024-10-08 16:21:50.483393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.308 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.308 "name": "raid_bdev1", 00:13:57.308 "aliases": [ 00:13:57.308 "b3c993ec-a50f-4556-a06d-8fe3d7ed2848" 00:13:57.308 ], 
00:13:57.308 "product_name": "Raid Volume", 00:13:57.308 "block_size": 512, 00:13:57.308 "num_blocks": 63488, 00:13:57.308 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:57.308 "assigned_rate_limits": { 00:13:57.308 "rw_ios_per_sec": 0, 00:13:57.308 "rw_mbytes_per_sec": 0, 00:13:57.308 "r_mbytes_per_sec": 0, 00:13:57.308 "w_mbytes_per_sec": 0 00:13:57.308 }, 00:13:57.308 "claimed": false, 00:13:57.308 "zoned": false, 00:13:57.308 "supported_io_types": { 00:13:57.308 "read": true, 00:13:57.308 "write": true, 00:13:57.308 "unmap": false, 00:13:57.308 "flush": false, 00:13:57.308 "reset": true, 00:13:57.308 "nvme_admin": false, 00:13:57.308 "nvme_io": false, 00:13:57.308 "nvme_io_md": false, 00:13:57.308 "write_zeroes": true, 00:13:57.308 "zcopy": false, 00:13:57.308 "get_zone_info": false, 00:13:57.308 "zone_management": false, 00:13:57.308 "zone_append": false, 00:13:57.308 "compare": false, 00:13:57.309 "compare_and_write": false, 00:13:57.309 "abort": false, 00:13:57.309 "seek_hole": false, 00:13:57.309 "seek_data": false, 00:13:57.309 "copy": false, 00:13:57.309 "nvme_iov_md": false 00:13:57.309 }, 00:13:57.309 "memory_domains": [ 00:13:57.309 { 00:13:57.309 "dma_device_id": "system", 00:13:57.309 "dma_device_type": 1 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.309 "dma_device_type": 2 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "system", 00:13:57.309 "dma_device_type": 1 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.309 "dma_device_type": 2 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "system", 00:13:57.309 "dma_device_type": 1 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.309 "dma_device_type": 2 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": "system", 00:13:57.309 "dma_device_type": 1 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:57.309 "dma_device_type": 2 00:13:57.309 } 00:13:57.309 ], 00:13:57.309 "driver_specific": { 00:13:57.309 "raid": { 00:13:57.309 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:57.309 "strip_size_kb": 0, 00:13:57.309 "state": "online", 00:13:57.309 "raid_level": "raid1", 00:13:57.309 "superblock": true, 00:13:57.309 "num_base_bdevs": 4, 00:13:57.309 "num_base_bdevs_discovered": 4, 00:13:57.309 "num_base_bdevs_operational": 4, 00:13:57.309 "base_bdevs_list": [ 00:13:57.309 { 00:13:57.309 "name": "pt1", 00:13:57.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:57.309 "is_configured": true, 00:13:57.309 "data_offset": 2048, 00:13:57.309 "data_size": 63488 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "name": "pt2", 00:13:57.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.309 "is_configured": true, 00:13:57.309 "data_offset": 2048, 00:13:57.309 "data_size": 63488 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "name": "pt3", 00:13:57.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.309 "is_configured": true, 00:13:57.309 "data_offset": 2048, 00:13:57.309 "data_size": 63488 00:13:57.309 }, 00:13:57.309 { 00:13:57.309 "name": "pt4", 00:13:57.309 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:57.309 "is_configured": true, 00:13:57.309 "data_offset": 2048, 00:13:57.309 "data_size": 63488 00:13:57.309 } 00:13:57.309 ] 00:13:57.309 } 00:13:57.309 } 00:13:57.309 }' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:57.309 pt2 00:13:57.309 pt3 00:13:57.309 pt4' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.309 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.567 16:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:57.567 [2024-10-08 16:21:50.843453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3c993ec-a50f-4556-a06d-8fe3d7ed2848 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3c993ec-a50f-4556-a06d-8fe3d7ed2848 ']' 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.567 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.567 [2024-10-08 16:21:50.887064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.826 [2024-10-08 16:21:50.887271] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.826 [2024-10-08 16:21:50.887390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.826 [2024-10-08 16:21:50.887560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.826 [2024-10-08 16:21:50.887590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.826 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.826 [2024-10-08 16:21:51.063095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:57.826 [2024-10-08 16:21:51.065651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:57.826 [2024-10-08 16:21:51.065929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:57.826 [2024-10-08 16:21:51.065999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:57.826 [2024-10-08 16:21:51.066069] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:57.826 [2024-10-08 16:21:51.066143] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:57.827 [2024-10-08 16:21:51.066176] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:57.827 [2024-10-08 16:21:51.066205] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:57.827 [2024-10-08 16:21:51.066227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.827 [2024-10-08 16:21:51.066243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:13:57.827 request: 00:13:57.827 { 00:13:57.827 "name": "raid_bdev1", 00:13:57.827 "raid_level": "raid1", 00:13:57.827 "base_bdevs": [ 00:13:57.827 "malloc1", 00:13:57.827 "malloc2", 00:13:57.827 "malloc3", 00:13:57.827 "malloc4" 00:13:57.827 ], 00:13:57.827 "superblock": false, 00:13:57.827 "method": "bdev_raid_create", 00:13:57.827 "req_id": 1 00:13:57.827 } 00:13:57.827 Got JSON-RPC error response 00:13:57.827 response: 00:13:57.827 { 00:13:57.827 "code": -17, 00:13:57.827 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:57.827 } 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.827 16:21:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.827 [2024-10-08 16:21:51.131143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.827 [2024-10-08 16:21:51.131460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.827 [2024-10-08 16:21:51.131508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:57.827 [2024-10-08 16:21:51.131544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.827 [2024-10-08 16:21:51.134419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.827 [2024-10-08 16:21:51.134471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.827 [2024-10-08 16:21:51.134594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:57.827 [2024-10-08 16:21:51.134667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.827 pt1 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.827 16:21:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.827 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.085 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.085 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.085 "name": "raid_bdev1", 00:13:58.085 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:58.085 "strip_size_kb": 0, 00:13:58.085 "state": "configuring", 00:13:58.085 "raid_level": "raid1", 00:13:58.085 "superblock": true, 00:13:58.085 "num_base_bdevs": 4, 00:13:58.085 "num_base_bdevs_discovered": 1, 00:13:58.085 "num_base_bdevs_operational": 4, 00:13:58.085 "base_bdevs_list": [ 00:13:58.085 { 00:13:58.085 "name": "pt1", 00:13:58.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.085 "is_configured": true, 00:13:58.085 "data_offset": 2048, 00:13:58.085 "data_size": 63488 00:13:58.085 }, 00:13:58.085 { 00:13:58.085 "name": null, 00:13:58.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.085 "is_configured": false, 00:13:58.085 "data_offset": 2048, 00:13:58.085 "data_size": 63488 00:13:58.085 }, 00:13:58.085 { 00:13:58.085 "name": null, 00:13:58.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.085 
"is_configured": false, 00:13:58.085 "data_offset": 2048, 00:13:58.085 "data_size": 63488 00:13:58.085 }, 00:13:58.085 { 00:13:58.085 "name": null, 00:13:58.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:58.085 "is_configured": false, 00:13:58.085 "data_offset": 2048, 00:13:58.085 "data_size": 63488 00:13:58.085 } 00:13:58.085 ] 00:13:58.085 }' 00:13:58.085 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.085 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.657 [2024-10-08 16:21:51.703256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:58.657 [2024-10-08 16:21:51.703367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.657 [2024-10-08 16:21:51.703405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:58.657 [2024-10-08 16:21:51.703423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.657 [2024-10-08 16:21:51.704023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.657 [2024-10-08 16:21:51.704062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:58.657 [2024-10-08 16:21:51.704164] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:58.657 [2024-10-08 16:21:51.704206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:58.657 pt2 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.657 [2024-10-08 16:21:51.711248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.657 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.657 "name": "raid_bdev1", 00:13:58.657 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:58.657 "strip_size_kb": 0, 00:13:58.657 "state": "configuring", 00:13:58.657 "raid_level": "raid1", 00:13:58.657 "superblock": true, 00:13:58.657 "num_base_bdevs": 4, 00:13:58.657 "num_base_bdevs_discovered": 1, 00:13:58.657 "num_base_bdevs_operational": 4, 00:13:58.657 "base_bdevs_list": [ 00:13:58.657 { 00:13:58.657 "name": "pt1", 00:13:58.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.657 "is_configured": true, 00:13:58.657 "data_offset": 2048, 00:13:58.657 "data_size": 63488 00:13:58.657 }, 00:13:58.657 { 00:13:58.657 "name": null, 00:13:58.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.657 "is_configured": false, 00:13:58.657 "data_offset": 0, 00:13:58.657 "data_size": 63488 00:13:58.657 }, 00:13:58.657 { 00:13:58.657 "name": null, 00:13:58.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.658 "is_configured": false, 00:13:58.658 "data_offset": 2048, 00:13:58.658 "data_size": 63488 00:13:58.658 }, 00:13:58.658 { 00:13:58.658 "name": null, 00:13:58.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:58.658 "is_configured": false, 00:13:58.658 "data_offset": 2048, 00:13:58.658 "data_size": 63488 00:13:58.658 } 00:13:58.658 ] 00:13:58.658 }' 00:13:58.658 16:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.658 16:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 [2024-10-08 16:21:52.251426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:59.225 [2024-10-08 16:21:52.251542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.225 [2024-10-08 16:21:52.251582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:59.225 [2024-10-08 16:21:52.251599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.225 [2024-10-08 16:21:52.252156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.225 [2024-10-08 16:21:52.252190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:59.225 [2024-10-08 16:21:52.252298] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:59.225 [2024-10-08 16:21:52.252329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:59.225 pt2 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:59.225 16:21:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 [2024-10-08 16:21:52.263367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:59.225 [2024-10-08 16:21:52.263429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.225 [2024-10-08 16:21:52.263459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:59.225 [2024-10-08 16:21:52.263473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.225 [2024-10-08 16:21:52.263937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.225 [2024-10-08 16:21:52.263972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:59.225 [2024-10-08 16:21:52.264052] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:59.225 [2024-10-08 16:21:52.264079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:59.225 pt3 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.225 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 [2024-10-08 16:21:52.271333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:59.225 [2024-10-08 
16:21:52.271382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.225 [2024-10-08 16:21:52.271408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:59.225 [2024-10-08 16:21:52.271422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.226 [2024-10-08 16:21:52.271876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.226 [2024-10-08 16:21:52.271917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:59.226 [2024-10-08 16:21:52.271995] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:59.226 [2024-10-08 16:21:52.272022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:59.226 [2024-10-08 16:21:52.272194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:59.226 [2024-10-08 16:21:52.272209] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.226 [2024-10-08 16:21:52.272554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:59.226 [2024-10-08 16:21:52.272755] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:59.226 [2024-10-08 16:21:52.272776] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:59.226 [2024-10-08 16:21:52.272932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.226 pt4 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.226 "name": "raid_bdev1", 00:13:59.226 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:59.226 "strip_size_kb": 0, 00:13:59.226 "state": "online", 00:13:59.226 "raid_level": "raid1", 00:13:59.226 "superblock": true, 00:13:59.226 "num_base_bdevs": 4, 00:13:59.226 
"num_base_bdevs_discovered": 4, 00:13:59.226 "num_base_bdevs_operational": 4, 00:13:59.226 "base_bdevs_list": [ 00:13:59.226 { 00:13:59.226 "name": "pt1", 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": "pt2", 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": "pt3", 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": "pt4", 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 } 00:13:59.226 ] 00:13:59.226 }' 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.226 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.484 [2024-10-08 16:21:52.767963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.484 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.743 "name": "raid_bdev1", 00:13:59.743 "aliases": [ 00:13:59.743 "b3c993ec-a50f-4556-a06d-8fe3d7ed2848" 00:13:59.743 ], 00:13:59.743 "product_name": "Raid Volume", 00:13:59.743 "block_size": 512, 00:13:59.743 "num_blocks": 63488, 00:13:59.743 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:59.743 "assigned_rate_limits": { 00:13:59.743 "rw_ios_per_sec": 0, 00:13:59.743 "rw_mbytes_per_sec": 0, 00:13:59.743 "r_mbytes_per_sec": 0, 00:13:59.743 "w_mbytes_per_sec": 0 00:13:59.743 }, 00:13:59.743 "claimed": false, 00:13:59.743 "zoned": false, 00:13:59.743 "supported_io_types": { 00:13:59.743 "read": true, 00:13:59.743 "write": true, 00:13:59.743 "unmap": false, 00:13:59.743 "flush": false, 00:13:59.743 "reset": true, 00:13:59.743 "nvme_admin": false, 00:13:59.743 "nvme_io": false, 00:13:59.743 "nvme_io_md": false, 00:13:59.743 "write_zeroes": true, 00:13:59.743 "zcopy": false, 00:13:59.743 "get_zone_info": false, 00:13:59.743 "zone_management": false, 00:13:59.743 "zone_append": false, 00:13:59.743 "compare": false, 00:13:59.743 "compare_and_write": false, 00:13:59.743 "abort": false, 00:13:59.743 "seek_hole": false, 00:13:59.743 "seek_data": false, 00:13:59.743 "copy": false, 00:13:59.743 "nvme_iov_md": false 00:13:59.743 }, 00:13:59.743 "memory_domains": [ 00:13:59.743 { 00:13:59.743 "dma_device_id": "system", 00:13:59.743 
"dma_device_type": 1 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.743 "dma_device_type": 2 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "system", 00:13:59.743 "dma_device_type": 1 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.743 "dma_device_type": 2 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "system", 00:13:59.743 "dma_device_type": 1 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.743 "dma_device_type": 2 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "system", 00:13:59.743 "dma_device_type": 1 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.743 "dma_device_type": 2 00:13:59.743 } 00:13:59.743 ], 00:13:59.743 "driver_specific": { 00:13:59.743 "raid": { 00:13:59.743 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:13:59.743 "strip_size_kb": 0, 00:13:59.743 "state": "online", 00:13:59.743 "raid_level": "raid1", 00:13:59.743 "superblock": true, 00:13:59.743 "num_base_bdevs": 4, 00:13:59.743 "num_base_bdevs_discovered": 4, 00:13:59.743 "num_base_bdevs_operational": 4, 00:13:59.743 "base_bdevs_list": [ 00:13:59.743 { 00:13:59.743 "name": "pt1", 00:13:59.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.743 "is_configured": true, 00:13:59.743 "data_offset": 2048, 00:13:59.743 "data_size": 63488 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "name": "pt2", 00:13:59.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.743 "is_configured": true, 00:13:59.743 "data_offset": 2048, 00:13:59.743 "data_size": 63488 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "name": "pt3", 00:13:59.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.743 "is_configured": true, 00:13:59.743 "data_offset": 2048, 00:13:59.743 "data_size": 63488 00:13:59.743 }, 00:13:59.743 { 00:13:59.743 "name": "pt4", 00:13:59.743 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:59.743 "is_configured": true, 00:13:59.743 "data_offset": 2048, 00:13:59.743 "data_size": 63488 00:13:59.743 } 00:13:59.743 ] 00:13:59.743 } 00:13:59.743 } 00:13:59.743 }' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:59.743 pt2 00:13:59.743 pt3 00:13:59.743 pt4' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.743 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.744 16:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.744 16:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.744 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.001 [2024-10-08 16:21:53.151985] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3c993ec-a50f-4556-a06d-8fe3d7ed2848 '!=' b3c993ec-a50f-4556-a06d-8fe3d7ed2848 ']' 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.001 [2024-10-08 16:21:53.207676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:00.001 16:21:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.001 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.002 "name": "raid_bdev1", 00:14:00.002 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:14:00.002 "strip_size_kb": 0, 00:14:00.002 "state": "online", 
00:14:00.002 "raid_level": "raid1", 00:14:00.002 "superblock": true, 00:14:00.002 "num_base_bdevs": 4, 00:14:00.002 "num_base_bdevs_discovered": 3, 00:14:00.002 "num_base_bdevs_operational": 3, 00:14:00.002 "base_bdevs_list": [ 00:14:00.002 { 00:14:00.002 "name": null, 00:14:00.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.002 "is_configured": false, 00:14:00.002 "data_offset": 0, 00:14:00.002 "data_size": 63488 00:14:00.002 }, 00:14:00.002 { 00:14:00.002 "name": "pt2", 00:14:00.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.002 "is_configured": true, 00:14:00.002 "data_offset": 2048, 00:14:00.002 "data_size": 63488 00:14:00.002 }, 00:14:00.002 { 00:14:00.002 "name": "pt3", 00:14:00.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.002 "is_configured": true, 00:14:00.002 "data_offset": 2048, 00:14:00.002 "data_size": 63488 00:14:00.002 }, 00:14:00.002 { 00:14:00.002 "name": "pt4", 00:14:00.002 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.002 "is_configured": true, 00:14:00.002 "data_offset": 2048, 00:14:00.002 "data_size": 63488 00:14:00.002 } 00:14:00.002 ] 00:14:00.002 }' 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.002 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.568 [2024-10-08 16:21:53.719890] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.568 [2024-10-08 16:21:53.719953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.568 [2024-10-08 16:21:53.720048] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:00.568 [2024-10-08 16:21:53.720149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.568 [2024-10-08 16:21:53.720166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.568 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:00.569 
16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 [2024-10-08 16:21:53.819862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.569 [2024-10-08 16:21:53.819968] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.569 [2024-10-08 16:21:53.820001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:00.569 [2024-10-08 16:21:53.820016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.569 [2024-10-08 16:21:53.822913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.569 [2024-10-08 16:21:53.822958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.569 [2024-10-08 16:21:53.823071] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.569 [2024-10-08 16:21:53.823127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.569 pt2 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.569 "name": "raid_bdev1", 00:14:00.569 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:14:00.569 "strip_size_kb": 0, 00:14:00.569 "state": "configuring", 00:14:00.569 "raid_level": "raid1", 00:14:00.569 "superblock": true, 00:14:00.569 "num_base_bdevs": 4, 00:14:00.569 "num_base_bdevs_discovered": 1, 00:14:00.569 "num_base_bdevs_operational": 3, 00:14:00.569 "base_bdevs_list": [ 00:14:00.569 { 00:14:00.569 "name": null, 00:14:00.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.569 "is_configured": false, 00:14:00.569 "data_offset": 2048, 00:14:00.569 "data_size": 63488 00:14:00.569 }, 00:14:00.569 { 00:14:00.569 "name": "pt2", 00:14:00.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.569 "is_configured": true, 00:14:00.569 "data_offset": 2048, 00:14:00.569 "data_size": 63488 00:14:00.569 }, 00:14:00.569 { 00:14:00.569 "name": null, 00:14:00.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.569 "is_configured": false, 00:14:00.569 "data_offset": 2048, 00:14:00.569 "data_size": 63488 00:14:00.569 }, 00:14:00.569 { 00:14:00.569 "name": null, 00:14:00.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.569 "is_configured": false, 00:14:00.569 "data_offset": 2048, 00:14:00.569 "data_size": 63488 00:14:00.569 } 00:14:00.569 ] 00:14:00.569 }' 
00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.569 16:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.136 [2024-10-08 16:21:54.344071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:01.136 [2024-10-08 16:21:54.344177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.136 [2024-10-08 16:21:54.344213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:01.136 [2024-10-08 16:21:54.344229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.136 [2024-10-08 16:21:54.344834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.136 [2024-10-08 16:21:54.344861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:01.136 [2024-10-08 16:21:54.344969] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:01.136 [2024-10-08 16:21:54.345000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:01.136 pt3 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.136 "name": "raid_bdev1", 00:14:01.136 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848", 00:14:01.136 "strip_size_kb": 0, 00:14:01.136 "state": "configuring", 00:14:01.136 "raid_level": "raid1", 00:14:01.136 "superblock": true, 00:14:01.136 "num_base_bdevs": 4, 00:14:01.136 "num_base_bdevs_discovered": 2, 00:14:01.136 "num_base_bdevs_operational": 3, 00:14:01.136 
"base_bdevs_list": [ 00:14:01.136 { 00:14:01.136 "name": null, 00:14:01.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.136 "is_configured": false, 00:14:01.136 "data_offset": 2048, 00:14:01.136 "data_size": 63488 00:14:01.136 }, 00:14:01.136 { 00:14:01.136 "name": "pt2", 00:14:01.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.136 "is_configured": true, 00:14:01.136 "data_offset": 2048, 00:14:01.136 "data_size": 63488 00:14:01.136 }, 00:14:01.136 { 00:14:01.136 "name": "pt3", 00:14:01.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.136 "is_configured": true, 00:14:01.136 "data_offset": 2048, 00:14:01.136 "data_size": 63488 00:14:01.136 }, 00:14:01.136 { 00:14:01.136 "name": null, 00:14:01.136 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.136 "is_configured": false, 00:14:01.136 "data_offset": 2048, 00:14:01.136 "data_size": 63488 00:14:01.136 } 00:14:01.136 ] 00:14:01.136 }' 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.136 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.704 [2024-10-08 16:21:54.836210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:01.704 [2024-10-08 16:21:54.836323] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.704 [2024-10-08 16:21:54.836362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:01.704 [2024-10-08 16:21:54.836377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.704 [2024-10-08 16:21:54.837010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.704 [2024-10-08 16:21:54.837036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:01.704 [2024-10-08 16:21:54.837141] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:01.704 [2024-10-08 16:21:54.837179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:01.704 [2024-10-08 16:21:54.837361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:01.704 [2024-10-08 16:21:54.837376] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.704 [2024-10-08 16:21:54.837721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:01.704 [2024-10-08 16:21:54.837917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:01.704 [2024-10-08 16:21:54.837945] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:01.704 [2024-10-08 16:21:54.838108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.704 pt4 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:01.704 "name": "raid_bdev1",
00:14:01.704 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848",
00:14:01.704 "strip_size_kb": 0,
00:14:01.704 "state": "online",
00:14:01.704 "raid_level": "raid1",
00:14:01.704 "superblock": true,
00:14:01.704 "num_base_bdevs": 4,
00:14:01.704 "num_base_bdevs_discovered": 3,
00:14:01.704 "num_base_bdevs_operational": 3,
00:14:01.704 "base_bdevs_list": [
00:14:01.704 {
00:14:01.704 "name": null,
00:14:01.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.704 "is_configured": false,
00:14:01.704 "data_offset": 2048,
00:14:01.704 "data_size": 63488
00:14:01.704 },
00:14:01.704 {
00:14:01.704 "name": "pt2",
00:14:01.704 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:01.704 "is_configured": true,
00:14:01.704 "data_offset": 2048,
00:14:01.704 "data_size": 63488
00:14:01.704 },
00:14:01.704 {
00:14:01.704 "name": "pt3",
00:14:01.704 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:01.704 "is_configured": true,
00:14:01.704 "data_offset": 2048,
00:14:01.704 "data_size": 63488
00:14:01.704 },
00:14:01.704 {
00:14:01.704 "name": "pt4",
00:14:01.704 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:01.704 "is_configured": true,
00:14:01.704 "data_offset": 2048,
00:14:01.704 "data_size": 63488
00:14:01.704 }
00:14:01.704 ]
00:14:01.704 }'
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:01.704 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.271 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:02.271 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.272 [2024-10-08 16:21:55.368337] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:02.272 [2024-10-08 16:21:55.368393] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:02.272 [2024-10-08 16:21:55.368501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:02.272 [2024-10-08 16:21:55.368610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:02.272 [2024-10-08 16:21:55.368638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.272 [2024-10-08 16:21:55.436346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:02.272 [2024-10-08 16:21:55.436475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:02.272 [2024-10-08 16:21:55.436506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:14:02.272 [2024-10-08 16:21:55.436548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:02.272 [2024-10-08 16:21:55.439474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:02.272 [2024-10-08 16:21:55.439550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:02.272 [2024-10-08 16:21:55.439663] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:02.272 [2024-10-08 16:21:55.439728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:02.272 [2024-10-08 16:21:55.439890] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:14:02.272 [2024-10-08 16:21:55.439916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:02.272 [2024-10-08 16:21:55.439938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:14:02.272 [2024-10-08 16:21:55.440015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:02.272 [2024-10-08 16:21:55.440156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:02.272 pt1
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:02.272 "name": "raid_bdev1",
00:14:02.272 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848",
00:14:02.272 "strip_size_kb": 0,
00:14:02.272 "state": "configuring",
00:14:02.272 "raid_level": "raid1",
00:14:02.272 "superblock": true,
00:14:02.272 "num_base_bdevs": 4,
00:14:02.272 "num_base_bdevs_discovered": 2,
00:14:02.272 "num_base_bdevs_operational": 3,
00:14:02.272 "base_bdevs_list": [
00:14:02.272 {
00:14:02.272 "name": null,
00:14:02.272 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:02.272 "is_configured": false,
00:14:02.272 "data_offset": 2048,
00:14:02.272 "data_size": 63488
00:14:02.272 },
00:14:02.272 {
00:14:02.272 "name": "pt2",
00:14:02.272 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:02.272 "is_configured": true,
00:14:02.272 "data_offset": 2048,
00:14:02.272 "data_size": 63488
00:14:02.272 },
00:14:02.272 {
00:14:02.272 "name": "pt3",
00:14:02.272 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:02.272 "is_configured": true,
00:14:02.272 "data_offset": 2048,
00:14:02.272 "data_size": 63488
00:14:02.272 },
00:14:02.272 {
00:14:02.272 "name": null,
00:14:02.272 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:02.272 "is_configured": false,
00:14:02.272 "data_offset": 2048,
00:14:02.272 "data_size": 63488
00:14:02.272 }
00:14:02.272 ]
00:14:02.272 }'
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:02.272 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.839 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.839 [2024-10-08 16:21:55.996565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:02.839 [2024-10-08 16:21:55.996905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:02.839 [2024-10-08 16:21:55.996953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:14:02.839 [2024-10-08 16:21:55.996970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:02.839 [2024-10-08 16:21:55.997542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:02.839 [2024-10-08 16:21:55.997568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:02.839 [2024-10-08 16:21:55.997674] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:02.839 [2024-10-08 16:21:55.997706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:02.839 [2024-10-08 16:21:55.997868] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:14:02.839 [2024-10-08 16:21:55.997884] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:02.839 [2024-10-08 16:21:55.998200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:14:02.839 [2024-10-08 16:21:55.998397] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:14:02.839 [2024-10-08 16:21:55.998417] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:14:02.839 [2024-10-08 16:21:55.998605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:02.839 pt4
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.839 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:02.839 "name": "raid_bdev1",
00:14:02.839 "uuid": "b3c993ec-a50f-4556-a06d-8fe3d7ed2848",
00:14:02.839 "strip_size_kb": 0,
00:14:02.839 "state": "online",
00:14:02.839 "raid_level": "raid1",
00:14:02.839 "superblock": true,
00:14:02.839 "num_base_bdevs": 4,
00:14:02.839 "num_base_bdevs_discovered": 3,
00:14:02.839 "num_base_bdevs_operational": 3,
00:14:02.839 "base_bdevs_list": [
00:14:02.839 {
00:14:02.839 "name": null,
00:14:02.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:02.839 "is_configured": false,
00:14:02.839 "data_offset": 2048,
00:14:02.839 "data_size": 63488
00:14:02.839 },
00:14:02.839 {
00:14:02.839 "name": "pt2",
00:14:02.839 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:02.839 "is_configured": true,
00:14:02.839 "data_offset": 2048,
00:14:02.839 "data_size": 63488
00:14:02.839 },
00:14:02.839 {
00:14:02.839 "name": "pt3",
00:14:02.839 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:02.839 "is_configured": true,
00:14:02.839 "data_offset": 2048,
00:14:02.839 "data_size": 63488
00:14:02.840 },
00:14:02.840 {
00:14:02.840 "name": "pt4",
00:14:02.840 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:02.840 "is_configured": true,
00:14:02.840 "data_offset": 2048,
00:14:02.840 "data_size": 63488
00:14:02.840 }
00:14:02.840 ]
00:14:02.840 }'
00:14:02.840 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:02.840 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.407 [2024-10-08 16:21:56.609027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b3c993ec-a50f-4556-a06d-8fe3d7ed2848 '!=' b3c993ec-a50f-4556-a06d-8fe3d7ed2848 ']'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75024
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75024 ']'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75024
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75024
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:03.407 killing process with pid 75024
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75024'
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75024
00:14:03.407 [2024-10-08 16:21:56.688475] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:03.407 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75024
00:14:03.407 [2024-10-08 16:21:56.688656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:03.407 [2024-10-08 16:21:56.688757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:03.407 [2024-10-08 16:21:56.688778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:14:03.973 [2024-10-08 16:21:57.043716] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:04.952 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:14:04.952
00:14:04.952 real 0m9.593s
00:14:04.952 user 0m15.596s
00:14:04.952 sys 0m1.430s
00:14:04.952 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:04.952 ************************************
00:14:04.952 END TEST raid_superblock_test
00:14:04.952 ************************************
00:14:04.952 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.210 16:21:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read
00:14:05.210 16:21:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:05.210 16:21:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:05.210 16:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:05.210 ************************************
00:14:05.210 START TEST raid_read_error_test
00:14:05.210 ************************************
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZwNqnBvoMj
00:14:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75528
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75528
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75528 ']'
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:05.210 16:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.210 [2024-10-08 16:21:58.442948] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:14:05.210 [2024-10-08 16:21:58.443147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75528 ]
00:14:05.467 [2024-10-08 16:21:58.619176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:05.726 [2024-10-08 16:21:58.863429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:14:05.984 [2024-10-08 16:21:59.068369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:05.984 [2024-10-08 16:21:59.068431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.242 BaseBdev1_malloc
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.242 true
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.242 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.242 [2024-10-08 16:21:59.559990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:14:06.242 [2024-10-08 16:21:59.560343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.242 [2024-10-08 16:21:59.560381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:06.242 [2024-10-08 16:21:59.560402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.242 [2024-10-08 16:21:59.563233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.242 [2024-10-08 16:21:59.563286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:06.242 BaseBdev1
00:14:06.500 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.500 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:06.500 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 BaseBdev2_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 true
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 [2024-10-08 16:21:59.636997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:14:06.501 [2024-10-08 16:21:59.637082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.501 [2024-10-08 16:21:59.637109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:14:06.501 [2024-10-08 16:21:59.637128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.501 [2024-10-08 16:21:59.639931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.501 [2024-10-08 16:21:59.639983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:06.501 BaseBdev2
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 BaseBdev3_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 true
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 [2024-10-08 16:21:59.701422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:14:06.501 [2024-10-08 16:21:59.701527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.501 [2024-10-08 16:21:59.701557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:14:06.501 [2024-10-08 16:21:59.701577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.501 [2024-10-08 16:21:59.704333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.501 [2024-10-08 16:21:59.704385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:06.501 BaseBdev3
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 BaseBdev4_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 true
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 [2024-10-08 16:21:59.758484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:14:06.501 [2024-10-08 16:21:59.758872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.501 [2024-10-08 16:21:59.759072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:14:06.501 [2024-10-08 16:21:59.759112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.501 [2024-10-08 16:21:59.762072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.501 [2024-10-08 16:21:59.762257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:06.501 BaseBdev4
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 [2024-10-08 16:21:59.766625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:06.501 [2024-10-08 16:21:59.769216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:06.501 [2024-10-08 16:21:59.769456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:06.501 [2024-10-08 16:21:59.769621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:06.501 [2024-10-08 16:21:59.769977] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:14:06.501 [2024-10-08 16:21:59.770121] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:06.501 [2024-10-08 16:21:59.770483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:14:06.501 [2024-10-08 16:21:59.770873] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:14:06.501 [2024-10-08 16:21:59.770994] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:14:06.501 [2024-10-08 16:21:59.771372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.501 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.804 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.804 "name": "raid_bdev1",
00:14:06.804 "uuid": "609de68b-36a0-4636-b785-dc1d232db6b2",
00:14:06.804 "strip_size_kb": 0,
00:14:06.804 "state": "online",
00:14:06.804 "raid_level": "raid1",
00:14:06.804 "superblock": true,
00:14:06.804 "num_base_bdevs": 4,
00:14:06.804 "num_base_bdevs_discovered": 4,
00:14:06.804 "num_base_bdevs_operational": 4,
00:14:06.804 "base_bdevs_list": [
00:14:06.804 {
00:14:06.804 "name": "BaseBdev1", 00:14:06.804 "uuid": "605cb529-c944-5f3d-91be-d02dc8c11731", 00:14:06.804 "is_configured": true, 00:14:06.804 "data_offset": 2048, 00:14:06.804 "data_size": 63488 00:14:06.804 }, 00:14:06.804 { 00:14:06.804 "name": "BaseBdev2", 00:14:06.804 "uuid": "6339aee4-d593-5304-9dfc-eefd809b5be0", 00:14:06.804 "is_configured": true, 00:14:06.804 "data_offset": 2048, 00:14:06.804 "data_size": 63488 00:14:06.804 }, 00:14:06.804 { 00:14:06.804 "name": "BaseBdev3", 00:14:06.804 "uuid": "681e45df-baad-5b1d-863c-ae60f12323d3", 00:14:06.804 "is_configured": true, 00:14:06.804 "data_offset": 2048, 00:14:06.804 "data_size": 63488 00:14:06.804 }, 00:14:06.804 { 00:14:06.804 "name": "BaseBdev4", 00:14:06.804 "uuid": "b4d1da40-97f2-5a14-8ebf-51350e5f3bf0", 00:14:06.804 "is_configured": true, 00:14:06.804 "data_offset": 2048, 00:14:06.804 "data_size": 63488 00:14:06.804 } 00:14:06.804 ] 00:14:06.804 }' 00:14:06.804 16:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.804 16:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.062 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:07.062 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:07.319 [2024-10-08 16:22:00.424949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.256 16:22:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.256 16:22:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.256 "name": "raid_bdev1", 00:14:08.256 "uuid": "609de68b-36a0-4636-b785-dc1d232db6b2", 00:14:08.256 "strip_size_kb": 0, 00:14:08.256 "state": "online", 00:14:08.256 "raid_level": "raid1", 00:14:08.256 "superblock": true, 00:14:08.256 "num_base_bdevs": 4, 00:14:08.256 "num_base_bdevs_discovered": 4, 00:14:08.256 "num_base_bdevs_operational": 4, 00:14:08.256 "base_bdevs_list": [ 00:14:08.256 { 00:14:08.256 "name": "BaseBdev1", 00:14:08.256 "uuid": "605cb529-c944-5f3d-91be-d02dc8c11731", 00:14:08.256 "is_configured": true, 00:14:08.256 "data_offset": 2048, 00:14:08.256 "data_size": 63488 00:14:08.256 }, 00:14:08.256 { 00:14:08.256 "name": "BaseBdev2", 00:14:08.256 "uuid": "6339aee4-d593-5304-9dfc-eefd809b5be0", 00:14:08.256 "is_configured": true, 00:14:08.256 "data_offset": 2048, 00:14:08.256 "data_size": 63488 00:14:08.256 }, 00:14:08.256 { 00:14:08.256 "name": "BaseBdev3", 00:14:08.256 "uuid": "681e45df-baad-5b1d-863c-ae60f12323d3", 00:14:08.256 "is_configured": true, 00:14:08.256 "data_offset": 2048, 00:14:08.256 "data_size": 63488 00:14:08.256 }, 00:14:08.256 { 00:14:08.256 "name": "BaseBdev4", 00:14:08.256 "uuid": "b4d1da40-97f2-5a14-8ebf-51350e5f3bf0", 00:14:08.256 "is_configured": true, 00:14:08.256 "data_offset": 2048, 00:14:08.256 "data_size": 63488 00:14:08.256 } 00:14:08.256 ] 00:14:08.256 }' 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.256 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.865 [2024-10-08 16:22:01.851507] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.865 [2024-10-08 16:22:01.851815] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.865 [2024-10-08 16:22:01.855276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.865 [2024-10-08 16:22:01.855488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.865 [2024-10-08 16:22:01.855799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.865 [2024-10-08 16:22:01.855964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:08.865 { 00:14:08.865 "results": [ 00:14:08.865 { 00:14:08.865 "job": "raid_bdev1", 00:14:08.865 "core_mask": "0x1", 00:14:08.865 "workload": "randrw", 00:14:08.865 "percentage": 50, 00:14:08.865 "status": "finished", 00:14:08.865 "queue_depth": 1, 00:14:08.865 "io_size": 131072, 00:14:08.865 "runtime": 1.424156, 00:14:08.865 "iops": 7161.434561944056, 00:14:08.865 "mibps": 895.179320243007, 00:14:08.865 "io_failed": 0, 00:14:08.865 "io_timeout": 0, 00:14:08.865 "avg_latency_us": 135.15714517466063, 00:14:08.865 "min_latency_us": 41.89090909090909, 00:14:08.865 "max_latency_us": 2040.5527272727272 00:14:08.865 } 00:14:08.865 ], 00:14:08.865 "core_count": 1 00:14:08.865 } 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75528 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75528 ']' 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75528 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75528 00:14:08.865 killing process with pid 75528 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75528' 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75528 00:14:08.865 [2024-10-08 16:22:01.900065] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.865 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75528 00:14:09.124 [2024-10-08 16:22:02.191168] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZwNqnBvoMj 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:10.502 00:14:10.502 real 0m5.133s 00:14:10.502 user 0m6.374s 00:14:10.502 sys 0m0.594s 
00:14:10.502 ************************************ 00:14:10.502 END TEST raid_read_error_test 00:14:10.502 ************************************ 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.502 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.502 16:22:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:10.502 16:22:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:10.502 16:22:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.502 16:22:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.502 ************************************ 00:14:10.502 START TEST raid_write_error_test 00:14:10.502 ************************************ 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DC4N6Y3Qmy 00:14:10.502 16:22:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75674 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75674 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75674 ']' 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.502 16:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.502 [2024-10-08 16:22:03.607287] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:14:10.502 [2024-10-08 16:22:03.607449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75674 ] 00:14:10.502 [2024-10-08 16:22:03.769433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.759 [2024-10-08 16:22:04.005586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.016 [2024-10-08 16:22:04.206930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.016 [2024-10-08 16:22:04.206994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 BaseBdev1_malloc 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 true 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 [2024-10-08 16:22:04.695084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:11.582 [2024-10-08 16:22:04.695168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.582 [2024-10-08 16:22:04.695199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:11.582 [2024-10-08 16:22:04.695218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.582 [2024-10-08 16:22:04.698165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.582 [2024-10-08 16:22:04.698217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.582 BaseBdev1 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 BaseBdev2_malloc 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:11.582 16:22:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 true 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:11.582 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-10-08 16:22:04.768636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:11.583 [2024-10-08 16:22:04.768930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.583 [2024-10-08 16:22:04.768967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:11.583 [2024-10-08 16:22:04.768987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.583 [2024-10-08 16:22:04.771724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.583 [2024-10-08 16:22:04.771772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.583 BaseBdev2 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:11.583 BaseBdev3_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 true 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-10-08 16:22:04.828610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:11.583 [2024-10-08 16:22:04.828699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.583 [2024-10-08 16:22:04.828732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:11.583 [2024-10-08 16:22:04.828751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.583 [2024-10-08 16:22:04.831459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.583 [2024-10-08 16:22:04.831760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.583 BaseBdev3 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 BaseBdev4_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 true 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-10-08 16:22:04.886625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:11.583 [2024-10-08 16:22:04.886920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.583 [2024-10-08 16:22:04.886959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:11.583 [2024-10-08 16:22:04.886979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.583 [2024-10-08 16:22:04.889775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.583 [2024-10-08 16:22:04.889826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:11.583 BaseBdev4 
00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.583 [2024-10-08 16:22:04.894775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.583 [2024-10-08 16:22:04.897274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.583 [2024-10-08 16:22:04.897564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.583 [2024-10-08 16:22:04.897673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.583 [2024-10-08 16:22:04.897964] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:11.583 [2024-10-08 16:22:04.897988] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.583 [2024-10-08 16:22:04.898292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:11.583 [2024-10-08 16:22:04.898509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:11.583 [2024-10-08 16:22:04.898608] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:11.583 [2024-10-08 16:22:04.898858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.583 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.842 "name": "raid_bdev1", 00:14:11.842 "uuid": "4cef7f9e-2a91-48e2-a69a-f948ea6ec6ce", 00:14:11.842 "strip_size_kb": 0, 00:14:11.842 "state": "online", 00:14:11.842 "raid_level": "raid1", 00:14:11.842 "superblock": true, 00:14:11.842 "num_base_bdevs": 4, 00:14:11.842 "num_base_bdevs_discovered": 4, 00:14:11.842 
"num_base_bdevs_operational": 4, 00:14:11.842 "base_bdevs_list": [ 00:14:11.842 { 00:14:11.842 "name": "BaseBdev1", 00:14:11.842 "uuid": "58a2beae-d901-5e19-a97f-826335c60738", 00:14:11.842 "is_configured": true, 00:14:11.842 "data_offset": 2048, 00:14:11.842 "data_size": 63488 00:14:11.842 }, 00:14:11.842 { 00:14:11.842 "name": "BaseBdev2", 00:14:11.842 "uuid": "7564cd4a-66a1-5de2-8dad-bd38f4fec19d", 00:14:11.842 "is_configured": true, 00:14:11.842 "data_offset": 2048, 00:14:11.842 "data_size": 63488 00:14:11.842 }, 00:14:11.842 { 00:14:11.842 "name": "BaseBdev3", 00:14:11.842 "uuid": "cde5adbc-a6db-5d22-9858-57d0d46f7401", 00:14:11.842 "is_configured": true, 00:14:11.842 "data_offset": 2048, 00:14:11.842 "data_size": 63488 00:14:11.842 }, 00:14:11.842 { 00:14:11.842 "name": "BaseBdev4", 00:14:11.842 "uuid": "af553bab-8d7e-5ec5-8ddf-a513b2332b60", 00:14:11.842 "is_configured": true, 00:14:11.842 "data_offset": 2048, 00:14:11.842 "data_size": 63488 00:14:11.842 } 00:14:11.842 ] 00:14:11.842 }' 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.842 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.099 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:12.099 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.357 [2024-10-08 16:22:05.484387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.294 [2024-10-08 16:22:06.385881] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:13.294 [2024-10-08 16:22:06.385966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.294 [2024-10-08 16:22:06.386232] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.294 "name": "raid_bdev1", 00:14:13.294 "uuid": "4cef7f9e-2a91-48e2-a69a-f948ea6ec6ce", 00:14:13.294 "strip_size_kb": 0, 00:14:13.294 "state": "online", 00:14:13.294 "raid_level": "raid1", 00:14:13.294 "superblock": true, 00:14:13.294 "num_base_bdevs": 4, 00:14:13.294 "num_base_bdevs_discovered": 3, 00:14:13.294 "num_base_bdevs_operational": 3, 00:14:13.294 "base_bdevs_list": [ 00:14:13.294 { 00:14:13.294 "name": null, 00:14:13.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.294 "is_configured": false, 00:14:13.294 "data_offset": 0, 00:14:13.294 "data_size": 63488 00:14:13.294 }, 00:14:13.294 { 00:14:13.294 "name": "BaseBdev2", 00:14:13.294 "uuid": "7564cd4a-66a1-5de2-8dad-bd38f4fec19d", 00:14:13.294 "is_configured": true, 00:14:13.294 "data_offset": 2048, 00:14:13.294 "data_size": 63488 00:14:13.294 }, 00:14:13.294 { 00:14:13.294 "name": "BaseBdev3", 00:14:13.294 "uuid": "cde5adbc-a6db-5d22-9858-57d0d46f7401", 00:14:13.294 "is_configured": true, 00:14:13.294 "data_offset": 2048, 00:14:13.294 "data_size": 63488 00:14:13.294 }, 00:14:13.294 { 00:14:13.294 "name": "BaseBdev4", 00:14:13.294 "uuid": "af553bab-8d7e-5ec5-8ddf-a513b2332b60", 00:14:13.294 "is_configured": true, 00:14:13.294 "data_offset": 2048, 00:14:13.294 "data_size": 63488 00:14:13.294 } 00:14:13.294 ] 
00:14:13.294 }' 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.294 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.862 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.862 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.863 [2024-10-08 16:22:06.901264] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.863 [2024-10-08 16:22:06.901559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.863 [2024-10-08 16:22:06.905083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.863 { 00:14:13.863 "results": [ 00:14:13.863 { 00:14:13.863 "job": "raid_bdev1", 00:14:13.863 "core_mask": "0x1", 00:14:13.863 "workload": "randrw", 00:14:13.863 "percentage": 50, 00:14:13.863 "status": "finished", 00:14:13.863 "queue_depth": 1, 00:14:13.863 "io_size": 131072, 00:14:13.863 "runtime": 1.41471, 00:14:13.863 "iops": 8394.653321175365, 00:14:13.863 "mibps": 1049.3316651469206, 00:14:13.863 "io_failed": 0, 00:14:13.863 "io_timeout": 0, 00:14:13.863 "avg_latency_us": 115.08508649989282, 00:14:13.863 "min_latency_us": 41.42545454545454, 00:14:13.863 "max_latency_us": 1936.290909090909 00:14:13.863 } 00:14:13.863 ], 00:14:13.863 "core_count": 1 00:14:13.863 } 00:14:13.863 [2024-10-08 16:22:06.905282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.863 [2024-10-08 16:22:06.905497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.863 [2024-10-08 16:22:06.905531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75674 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75674 ']' 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75674 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75674 00:14:13.863 killing process with pid 75674 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75674' 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75674 00:14:13.863 [2024-10-08 16:22:06.949802] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.863 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75674 00:14:14.120 [2024-10-08 16:22:07.238009] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DC4N6Y3Qmy 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:15.494 ************************************ 00:14:15.494 END TEST 
raid_write_error_test 00:14:15.494 ************************************ 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:15.494 00:14:15.494 real 0m5.014s 00:14:15.494 user 0m6.091s 00:14:15.494 sys 0m0.614s 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.494 16:22:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 16:22:08 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:15.494 16:22:08 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:15.494 16:22:08 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:15.494 16:22:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:15.494 16:22:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.494 16:22:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 ************************************ 00:14:15.494 START TEST raid_rebuild_test 00:14:15.494 ************************************ 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75823 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75823 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75823 ']' 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.494 16:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 [2024-10-08 16:22:08.691050] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:14:15.494 [2024-10-08 16:22:08.691539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75823 ]
00:14:15.494 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.494 Zero copy mechanism will not be used. 00:14:15.494 [2024-10-08 16:22:08.866736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.011 [2024-10-08 16:22:09.108559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.011 [2024-10-08 16:22:09.313643] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.011 [2024-10-08 16:22:09.313726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 BaseBdev1_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 [2024-10-08 16:22:09.704338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.578 [2024-10-08 16:22:09.704636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.578 [2024-10-08 16:22:09.704787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.578
[2024-10-08 16:22:09.704918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.578 [2024-10-08 16:22:09.707762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.578 [2024-10-08 16:22:09.707815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.578 BaseBdev1 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 BaseBdev2_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 [2024-10-08 16:22:09.767425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.578 [2024-10-08 16:22:09.767885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.578 [2024-10-08 16:22:09.768036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.578 [2024-10-08 16:22:09.768168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.578 [2024-10-08 16:22:09.771386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:16.578 [2024-10-08 16:22:09.771603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.578 BaseBdev2 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 spare_malloc 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 spare_delay 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 [2024-10-08 16:22:09.832152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.578 [2024-10-08 16:22:09.832248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.578 [2024-10-08 16:22:09.832280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:16.578 [2024-10-08 16:22:09.832299] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:16.578 [2024-10-08 16:22:09.835048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.578 [2024-10-08 16:22:09.835280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.578 spare 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 [2024-10-08 16:22:09.844220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.578 [2024-10-08 16:22:09.846625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.578 [2024-10-08 16:22:09.846739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:16.578 [2024-10-08 16:22:09.846760] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:16.578 [2024-10-08 16:22:09.847092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:16.578 [2024-10-08 16:22:09.847292] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.578 [2024-10-08 16:22:09.847308] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:16.578 [2024-10-08 16:22:09.847487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.578 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.579 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.837 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.837 "name": "raid_bdev1", 00:14:16.837 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:16.837 "strip_size_kb": 0, 00:14:16.837 "state": "online", 00:14:16.837 "raid_level": "raid1", 00:14:16.837 "superblock": false, 00:14:16.837 "num_base_bdevs": 2, 00:14:16.837 "num_base_bdevs_discovered": 2, 00:14:16.837 "num_base_bdevs_operational": 2, 00:14:16.837 "base_bdevs_list": [ 00:14:16.837 { 00:14:16.837 
"name": "BaseBdev1", 00:14:16.837 "uuid": "4fe765d8-deae-52d5-9faa-d03c0798f0e0", 00:14:16.837 "is_configured": true, 00:14:16.837 "data_offset": 0, 00:14:16.837 "data_size": 65536 00:14:16.837 }, 00:14:16.837 { 00:14:16.837 "name": "BaseBdev2", 00:14:16.837 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:16.837 "is_configured": true, 00:14:16.837 "data_offset": 0, 00:14:16.837 "data_size": 65536 00:14:16.837 } 00:14:16.837 ] 00:14:16.837 }' 00:14:16.837 16:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.837 16:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:17.096 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.096 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:17.096 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 [2024-10-08 16:22:10.380786] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.096 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # 
data_offset=0 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.354 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:17.612 [2024-10-08 16:22:10.748648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:17.612 /dev/nbd0 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:17.612 16:22:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.612 1+0 records in 00:14:17.612 1+0 records out 00:14:17.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070329 s, 5.8 MB/s 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:17.612 16:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 
bs=512 count=65536 oflag=direct 00:14:24.205 65536+0 records in 00:14:24.205 65536+0 records out 00:14:24.205 33554432 bytes (34 MB, 32 MiB) copied, 6.70256 s, 5.0 MB/s 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.206 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.771 [2024-10-08 16:22:17.801282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.771 
16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 [2024-10-08 16:22:17.813390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.771 "name": "raid_bdev1", 00:14:24.771 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:24.771 "strip_size_kb": 0, 00:14:24.771 "state": "online", 00:14:24.771 "raid_level": "raid1", 00:14:24.771 "superblock": false, 00:14:24.771 "num_base_bdevs": 2, 00:14:24.771 "num_base_bdevs_discovered": 1, 00:14:24.771 "num_base_bdevs_operational": 1, 00:14:24.771 "base_bdevs_list": [ 00:14:24.771 { 00:14:24.771 "name": null, 00:14:24.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.771 "is_configured": false, 00:14:24.771 "data_offset": 0, 00:14:24.771 "data_size": 65536 00:14:24.771 }, 00:14:24.771 { 00:14:24.771 "name": "BaseBdev2", 00:14:24.771 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:24.771 "is_configured": true, 00:14:24.771 "data_offset": 0, 00:14:24.771 "data_size": 65536 00:14:24.771 } 00:14:24.771 ] 00:14:24.771 }' 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.771 16:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.029 16:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.029 16:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.029 16:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.029 [2024-10-08 16:22:18.301588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.029 [2024-10-08 16:22:18.317156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:25.029 16:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.029 16:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:25.029 [2024-10-08 16:22:18.319752] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.410 "name": "raid_bdev1", 00:14:26.410 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:26.410 "strip_size_kb": 0, 00:14:26.410 "state": "online", 00:14:26.410 "raid_level": "raid1", 00:14:26.410 "superblock": false, 00:14:26.410 "num_base_bdevs": 2, 00:14:26.410 "num_base_bdevs_discovered": 2, 00:14:26.410 "num_base_bdevs_operational": 2, 00:14:26.410 "process": { 00:14:26.410 "type": "rebuild", 00:14:26.410 "target": "spare", 00:14:26.410 "progress": { 00:14:26.410 "blocks": 20480, 00:14:26.410 "percent": 31 00:14:26.410 } 00:14:26.410 }, 00:14:26.410 "base_bdevs_list": [ 00:14:26.410 { 00:14:26.410 "name": "spare", 00:14:26.410 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:26.410 
"is_configured": true, 00:14:26.410 "data_offset": 0, 00:14:26.410 "data_size": 65536 00:14:26.410 }, 00:14:26.410 { 00:14:26.410 "name": "BaseBdev2", 00:14:26.410 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:26.410 "is_configured": true, 00:14:26.410 "data_offset": 0, 00:14:26.410 "data_size": 65536 00:14:26.410 } 00:14:26.410 ] 00:14:26.410 }' 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 [2024-10-08 16:22:19.480907] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.410 [2024-10-08 16:22:19.528889] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.410 [2024-10-08 16:22:19.529029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.410 [2024-10-08 16:22:19.529053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.410 [2024-10-08 16:22:19.529068] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.410 
16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.410 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.410 "name": "raid_bdev1", 00:14:26.410 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:26.410 "strip_size_kb": 0, 00:14:26.410 "state": "online", 00:14:26.410 "raid_level": "raid1", 00:14:26.410 "superblock": false, 00:14:26.410 "num_base_bdevs": 2, 00:14:26.410 "num_base_bdevs_discovered": 1, 00:14:26.411 "num_base_bdevs_operational": 1, 00:14:26.411 "base_bdevs_list": [ 00:14:26.411 { 00:14:26.411 "name": null, 
00:14:26.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.411 "is_configured": false, 00:14:26.411 "data_offset": 0, 00:14:26.411 "data_size": 65536 00:14:26.411 }, 00:14:26.411 { 00:14:26.411 "name": "BaseBdev2", 00:14:26.411 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:26.411 "is_configured": true, 00:14:26.411 "data_offset": 0, 00:14:26.411 "data_size": 65536 00:14:26.411 } 00:14:26.411 ] 00:14:26.411 }' 00:14:26.411 16:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.411 16:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.977 "name": "raid_bdev1", 00:14:26.977 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:26.977 "strip_size_kb": 0, 00:14:26.977 "state": "online", 00:14:26.977 "raid_level": "raid1", 
00:14:26.977 "superblock": false, 00:14:26.977 "num_base_bdevs": 2, 00:14:26.977 "num_base_bdevs_discovered": 1, 00:14:26.977 "num_base_bdevs_operational": 1, 00:14:26.977 "base_bdevs_list": [ 00:14:26.977 { 00:14:26.977 "name": null, 00:14:26.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.977 "is_configured": false, 00:14:26.977 "data_offset": 0, 00:14:26.977 "data_size": 65536 00:14:26.977 }, 00:14:26.977 { 00:14:26.977 "name": "BaseBdev2", 00:14:26.977 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:26.977 "is_configured": true, 00:14:26.977 "data_offset": 0, 00:14:26.977 "data_size": 65536 00:14:26.977 } 00:14:26.977 ] 00:14:26.977 }' 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.977 [2024-10-08 16:22:20.254974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.977 [2024-10-08 16:22:20.269429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.977 16:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:26.977 [2024-10-08 16:22:20.271860] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.353 "name": "raid_bdev1", 00:14:28.353 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:28.353 "strip_size_kb": 0, 00:14:28.353 "state": "online", 00:14:28.353 "raid_level": "raid1", 00:14:28.353 "superblock": false, 00:14:28.353 "num_base_bdevs": 2, 00:14:28.353 "num_base_bdevs_discovered": 2, 00:14:28.353 "num_base_bdevs_operational": 2, 00:14:28.353 "process": { 00:14:28.353 "type": "rebuild", 00:14:28.353 "target": "spare", 00:14:28.353 "progress": { 00:14:28.353 "blocks": 20480, 00:14:28.353 "percent": 31 00:14:28.353 } 00:14:28.353 }, 00:14:28.353 "base_bdevs_list": [ 00:14:28.353 { 00:14:28.353 "name": "spare", 00:14:28.353 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:28.353 "is_configured": true, 00:14:28.353 "data_offset": 0, 00:14:28.353 "data_size": 65536 00:14:28.353 }, 
00:14:28.353 { 00:14:28.353 "name": "BaseBdev2", 00:14:28.353 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:28.353 "is_configured": true, 00:14:28.353 "data_offset": 0, 00:14:28.353 "data_size": 65536 00:14:28.353 } 00:14:28.353 ] 00:14:28.353 }' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.353 "name": "raid_bdev1", 00:14:28.353 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:28.353 "strip_size_kb": 0, 00:14:28.353 "state": "online", 00:14:28.353 "raid_level": "raid1", 00:14:28.353 "superblock": false, 00:14:28.353 "num_base_bdevs": 2, 00:14:28.353 "num_base_bdevs_discovered": 2, 00:14:28.353 "num_base_bdevs_operational": 2, 00:14:28.353 "process": { 00:14:28.353 "type": "rebuild", 00:14:28.353 "target": "spare", 00:14:28.353 "progress": { 00:14:28.353 "blocks": 24576, 00:14:28.353 "percent": 37 00:14:28.353 } 00:14:28.353 }, 00:14:28.353 "base_bdevs_list": [ 00:14:28.353 { 00:14:28.353 "name": "spare", 00:14:28.353 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:28.353 "is_configured": true, 00:14:28.353 "data_offset": 0, 00:14:28.353 "data_size": 65536 00:14:28.353 }, 00:14:28.353 { 00:14:28.353 "name": "BaseBdev2", 00:14:28.353 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:28.353 "is_configured": true, 00:14:28.353 "data_offset": 0, 00:14:28.353 "data_size": 65536 00:14:28.353 } 00:14:28.353 ] 00:14:28.353 }' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:28.353 16:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.727 "name": "raid_bdev1", 00:14:29.727 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:29.727 "strip_size_kb": 0, 00:14:29.727 "state": "online", 00:14:29.727 "raid_level": "raid1", 00:14:29.727 "superblock": false, 00:14:29.727 "num_base_bdevs": 2, 00:14:29.727 "num_base_bdevs_discovered": 2, 00:14:29.727 "num_base_bdevs_operational": 2, 00:14:29.727 "process": { 00:14:29.727 "type": "rebuild", 00:14:29.727 "target": "spare", 00:14:29.727 "progress": { 00:14:29.727 "blocks": 47104, 00:14:29.727 "percent": 71 00:14:29.727 } 00:14:29.727 }, 00:14:29.727 "base_bdevs_list": [ 
00:14:29.727 { 00:14:29.727 "name": "spare", 00:14:29.727 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:29.727 "is_configured": true, 00:14:29.727 "data_offset": 0, 00:14:29.727 "data_size": 65536 00:14:29.727 }, 00:14:29.727 { 00:14:29.727 "name": "BaseBdev2", 00:14:29.727 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:29.727 "is_configured": true, 00:14:29.727 "data_offset": 0, 00:14:29.727 "data_size": 65536 00:14:29.727 } 00:14:29.727 ] 00:14:29.727 }' 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.727 16:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.293 [2024-10-08 16:22:23.495715] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.293 [2024-10-08 16:22:23.495841] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.293 [2024-10-08 16:22:23.495926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.550 "name": "raid_bdev1", 00:14:30.550 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:30.550 "strip_size_kb": 0, 00:14:30.550 "state": "online", 00:14:30.550 "raid_level": "raid1", 00:14:30.550 "superblock": false, 00:14:30.550 "num_base_bdevs": 2, 00:14:30.550 "num_base_bdevs_discovered": 2, 00:14:30.550 "num_base_bdevs_operational": 2, 00:14:30.550 "base_bdevs_list": [ 00:14:30.550 { 00:14:30.550 "name": "spare", 00:14:30.550 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:30.550 "is_configured": true, 00:14:30.550 "data_offset": 0, 00:14:30.550 "data_size": 65536 00:14:30.550 }, 00:14:30.550 { 00:14:30.550 "name": "BaseBdev2", 00:14:30.550 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:30.550 "is_configured": true, 00:14:30.550 "data_offset": 0, 00:14:30.550 "data_size": 65536 00:14:30.550 } 00:14:30.550 ] 00:14:30.550 }' 00:14:30.550 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.862 
16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 16:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.862 "name": "raid_bdev1", 00:14:30.862 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:30.862 "strip_size_kb": 0, 00:14:30.862 "state": "online", 00:14:30.862 "raid_level": "raid1", 00:14:30.862 "superblock": false, 00:14:30.862 "num_base_bdevs": 2, 00:14:30.862 "num_base_bdevs_discovered": 2, 00:14:30.862 "num_base_bdevs_operational": 2, 00:14:30.862 "base_bdevs_list": [ 00:14:30.862 { 00:14:30.862 "name": "spare", 00:14:30.862 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:30.862 "is_configured": true, 00:14:30.862 "data_offset": 0, 00:14:30.862 "data_size": 65536 00:14:30.862 }, 00:14:30.862 { 00:14:30.862 "name": "BaseBdev2", 00:14:30.862 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:30.862 "is_configured": 
true, 00:14:30.862 "data_offset": 0, 00:14:30.862 "data_size": 65536 00:14:30.862 } 00:14:30.862 ] 00:14:30.862 }' 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.862 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.863 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.126 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.126 "name": "raid_bdev1", 00:14:31.126 "uuid": "451aa8e8-b550-4cc7-a2b8-7fdfc0095e34", 00:14:31.126 "strip_size_kb": 0, 00:14:31.126 "state": "online", 00:14:31.126 "raid_level": "raid1", 00:14:31.126 "superblock": false, 00:14:31.126 "num_base_bdevs": 2, 00:14:31.126 "num_base_bdevs_discovered": 2, 00:14:31.126 "num_base_bdevs_operational": 2, 00:14:31.126 "base_bdevs_list": [ 00:14:31.126 { 00:14:31.126 "name": "spare", 00:14:31.126 "uuid": "65640187-39b0-5526-8d57-28e2b5a2fbd2", 00:14:31.126 "is_configured": true, 00:14:31.126 "data_offset": 0, 00:14:31.126 "data_size": 65536 00:14:31.126 }, 00:14:31.126 { 00:14:31.126 "name": "BaseBdev2", 00:14:31.126 "uuid": "2d6fae69-6ad5-544c-8334-25392b659de2", 00:14:31.126 "is_configured": true, 00:14:31.126 "data_offset": 0, 00:14:31.126 "data_size": 65536 00:14:31.126 } 00:14:31.126 ] 00:14:31.126 }' 00:14:31.126 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.126 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.387 [2024-10-08 16:22:24.653658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.387 [2024-10-08 16:22:24.653905] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.387 [2024-10-08 16:22:24.654132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:31.387 [2024-10-08 16:22:24.654342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.387 [2024-10-08 16:22:24.654480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.387 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:31.645 
16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.645 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.903 /dev/nbd0 00:14:31.903 16:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.903 1+0 records in 00:14:31.903 1+0 records out 00:14:31.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043764 s, 9.4 MB/s 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.903 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:32.161 /dev/nbd1 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.161 1+0 records in 00:14:32.161 1+0 records out 00:14:32.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373675 
s, 11.0 MB/s 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.161 16:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.419 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.677 16:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.936 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75823 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75823 ']' 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75823 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@955 -- # uname 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75823 00:14:33.194 killing process with pid 75823 00:14:33.194 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.194 00:14:33.194 Latency(us) 00:14:33.194 [2024-10-08T16:22:26.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.194 [2024-10-08T16:22:26.516Z] =================================================================================================================== 00:14:33.194 [2024-10-08T16:22:26.516Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75823' 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75823 00:14:33.194 [2024-10-08 16:22:26.297801] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.194 16:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75823 00:14:33.453 [2024-10-08 16:22:26.575199] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.842 00:14:34.842 real 0m19.294s 00:14:34.842 user 0m21.923s 00:14:34.842 sys 0m3.916s 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.842 ************************************ 00:14:34.842 END TEST raid_rebuild_test 00:14:34.842 ************************************ 00:14:34.842 
16:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.842 16:22:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:34.842 16:22:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:34.842 16:22:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.842 16:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.842 ************************************ 00:14:34.842 START TEST raid_rebuild_test_sb 00:14:34.842 ************************************ 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.842 16:22:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76280 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76280 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76280 ']' 00:14:34.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.842 16:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.842 [2024-10-08 16:22:28.046572] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:14:34.842 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.842 Zero copy mechanism will not be used. 00:14:34.842 [2024-10-08 16:22:28.047732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76280 ] 00:14:35.100 [2024-10-08 16:22:28.225429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.357 [2024-10-08 16:22:28.478980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.615 [2024-10-08 16:22:28.688998] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.615 [2024-10-08 16:22:28.689282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 BaseBdev1_malloc 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 [2024-10-08 16:22:29.126594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:35.873 [2024-10-08 16:22:29.126719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.873 [2024-10-08 16:22:29.126754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:35.873 [2024-10-08 16:22:29.126779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.873 [2024-10-08 16:22:29.129990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.873 [2024-10-08 16:22:29.130060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.873 BaseBdev1 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 BaseBdev2_malloc 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 [2024-10-08 16:22:29.188183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:35.873 [2024-10-08 16:22:29.188290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.873 [2024-10-08 16:22:29.188320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:35.873 [2024-10-08 16:22:29.188338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.873 [2024-10-08 16:22:29.191050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.873 [2024-10-08 16:22:29.191118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.873 BaseBdev2 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.873 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 spare_malloc 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 spare_delay 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 [2024-10-08 16:22:29.244647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.131 [2024-10-08 16:22:29.245045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.131 [2024-10-08 16:22:29.245086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:36.131 [2024-10-08 16:22:29.245107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.131 [2024-10-08 16:22:29.247990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.131 [2024-10-08 16:22:29.248060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.131 spare 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 
[2024-10-08 16:22:29.252759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.131 [2024-10-08 16:22:29.255086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.131 [2024-10-08 16:22:29.255475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:36.131 [2024-10-08 16:22:29.255507] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.131 [2024-10-08 16:22:29.255903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.131 [2024-10-08 16:22:29.256176] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.131 [2024-10-08 16:22:29.256193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.131 [2024-10-08 16:22:29.256382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.131 "name": "raid_bdev1", 00:14:36.131 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:36.131 "strip_size_kb": 0, 00:14:36.131 "state": "online", 00:14:36.131 "raid_level": "raid1", 00:14:36.131 "superblock": true, 00:14:36.131 "num_base_bdevs": 2, 00:14:36.131 "num_base_bdevs_discovered": 2, 00:14:36.131 "num_base_bdevs_operational": 2, 00:14:36.131 "base_bdevs_list": [ 00:14:36.131 { 00:14:36.131 "name": "BaseBdev1", 00:14:36.131 "uuid": "55240d1f-0251-52f3-b59d-01aa58913d34", 00:14:36.131 "is_configured": true, 00:14:36.131 "data_offset": 2048, 00:14:36.131 "data_size": 63488 00:14:36.131 }, 00:14:36.131 { 00:14:36.131 "name": "BaseBdev2", 00:14:36.131 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:36.131 "is_configured": true, 00:14:36.131 "data_offset": 2048, 00:14:36.131 "data_size": 63488 00:14:36.131 } 00:14:36.131 ] 00:14:36.131 }' 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.131 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.698 [2024-10-08 16:22:29.793318] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.698 16:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:36.956 [2024-10-08 16:22:30.145041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.956 /dev/nbd0 00:14:36.956 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.956 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.956 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:36.956 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:36.956 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.957 1+0 records in 00:14:36.957 1+0 records out 00:14:36.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662337 s, 6.2 MB/s 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:36.957 16:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:43.510 63488+0 records in 00:14:43.510 63488+0 records out 00:14:43.510 32505856 bytes (33 MB, 31 MiB) copied, 6.45946 s, 5.0 MB/s 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:43.510 16:22:36 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.510 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:43.768 [2024-10-08 16:22:36.916093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.768 [2024-10-08 16:22:36.948160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.768 16:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.768 16:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.768 "name": "raid_bdev1", 00:14:43.768 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:43.768 "strip_size_kb": 0, 00:14:43.768 "state": "online", 00:14:43.768 "raid_level": "raid1", 00:14:43.768 "superblock": true, 00:14:43.768 "num_base_bdevs": 2, 00:14:43.768 "num_base_bdevs_discovered": 1, 00:14:43.768 
"num_base_bdevs_operational": 1, 00:14:43.768 "base_bdevs_list": [ 00:14:43.768 { 00:14:43.768 "name": null, 00:14:43.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.768 "is_configured": false, 00:14:43.768 "data_offset": 0, 00:14:43.768 "data_size": 63488 00:14:43.768 }, 00:14:43.768 { 00:14:43.768 "name": "BaseBdev2", 00:14:43.768 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:43.768 "is_configured": true, 00:14:43.768 "data_offset": 2048, 00:14:43.768 "data_size": 63488 00:14:43.768 } 00:14:43.768 ] 00:14:43.768 }' 00:14:43.768 16:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.768 16:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.334 16:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.334 16:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.334 16:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.334 [2024-10-08 16:22:37.448335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.334 [2024-10-08 16:22:37.463641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:44.334 16:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.334 16:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:44.334 [2024-10-08 16:22:37.466084] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.266 "name": "raid_bdev1", 00:14:45.266 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:45.266 "strip_size_kb": 0, 00:14:45.266 "state": "online", 00:14:45.266 "raid_level": "raid1", 00:14:45.266 "superblock": true, 00:14:45.266 "num_base_bdevs": 2, 00:14:45.266 "num_base_bdevs_discovered": 2, 00:14:45.266 "num_base_bdevs_operational": 2, 00:14:45.266 "process": { 00:14:45.266 "type": "rebuild", 00:14:45.266 "target": "spare", 00:14:45.266 "progress": { 00:14:45.266 "blocks": 20480, 00:14:45.266 "percent": 32 00:14:45.266 } 00:14:45.266 }, 00:14:45.266 "base_bdevs_list": [ 00:14:45.266 { 00:14:45.266 "name": "spare", 00:14:45.266 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:45.266 "is_configured": true, 00:14:45.266 "data_offset": 2048, 00:14:45.266 "data_size": 63488 00:14:45.266 }, 00:14:45.266 { 00:14:45.266 "name": "BaseBdev2", 00:14:45.266 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:45.266 "is_configured": true, 00:14:45.266 "data_offset": 2048, 00:14:45.266 "data_size": 63488 00:14:45.266 } 00:14:45.266 ] 00:14:45.266 }' 00:14:45.266 16:22:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.266 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.523 [2024-10-08 16:22:38.631191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.523 [2024-10-08 16:22:38.675155] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.523 [2024-10-08 16:22:38.675242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.523 [2024-10-08 16:22:38.675267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.523 [2024-10-08 16:22:38.675283] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.523 16:22:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.523 "name": "raid_bdev1", 00:14:45.523 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:45.523 "strip_size_kb": 0, 00:14:45.523 "state": "online", 00:14:45.523 "raid_level": "raid1", 00:14:45.523 "superblock": true, 00:14:45.523 "num_base_bdevs": 2, 00:14:45.523 "num_base_bdevs_discovered": 1, 00:14:45.523 "num_base_bdevs_operational": 1, 00:14:45.523 "base_bdevs_list": [ 00:14:45.523 { 00:14:45.523 "name": null, 00:14:45.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.523 "is_configured": false, 00:14:45.523 "data_offset": 0, 00:14:45.523 "data_size": 63488 00:14:45.523 }, 00:14:45.523 { 00:14:45.523 "name": "BaseBdev2", 00:14:45.523 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:45.523 
"is_configured": true, 00:14:45.523 "data_offset": 2048, 00:14:45.523 "data_size": 63488 00:14:45.523 } 00:14:45.523 ] 00:14:45.523 }' 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.523 16:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.089 "name": "raid_bdev1", 00:14:46.089 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:46.089 "strip_size_kb": 0, 00:14:46.089 "state": "online", 00:14:46.089 "raid_level": "raid1", 00:14:46.089 "superblock": true, 00:14:46.089 "num_base_bdevs": 2, 00:14:46.089 "num_base_bdevs_discovered": 1, 00:14:46.089 "num_base_bdevs_operational": 1, 00:14:46.089 "base_bdevs_list": [ 00:14:46.089 { 00:14:46.089 "name": null, 00:14:46.089 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:46.089 "is_configured": false, 00:14:46.089 "data_offset": 0, 00:14:46.089 "data_size": 63488 00:14:46.089 }, 00:14:46.089 { 00:14:46.089 "name": "BaseBdev2", 00:14:46.089 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:46.089 "is_configured": true, 00:14:46.089 "data_offset": 2048, 00:14:46.089 "data_size": 63488 00:14:46.089 } 00:14:46.089 ] 00:14:46.089 }' 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.089 [2024-10-08 16:22:39.381416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.089 [2024-10-08 16:22:39.396037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.089 16:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:46.089 [2024-10-08 16:22:39.398618] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.463 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.463 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:47.463 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.464 "name": "raid_bdev1", 00:14:47.464 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:47.464 "strip_size_kb": 0, 00:14:47.464 "state": "online", 00:14:47.464 "raid_level": "raid1", 00:14:47.464 "superblock": true, 00:14:47.464 "num_base_bdevs": 2, 00:14:47.464 "num_base_bdevs_discovered": 2, 00:14:47.464 "num_base_bdevs_operational": 2, 00:14:47.464 "process": { 00:14:47.464 "type": "rebuild", 00:14:47.464 "target": "spare", 00:14:47.464 "progress": { 00:14:47.464 "blocks": 20480, 00:14:47.464 "percent": 32 00:14:47.464 } 00:14:47.464 }, 00:14:47.464 "base_bdevs_list": [ 00:14:47.464 { 00:14:47.464 "name": "spare", 00:14:47.464 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:47.464 "is_configured": true, 00:14:47.464 "data_offset": 2048, 00:14:47.464 "data_size": 63488 00:14:47.464 }, 00:14:47.464 { 00:14:47.464 "name": "BaseBdev2", 00:14:47.464 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:47.464 "is_configured": true, 00:14:47.464 "data_offset": 2048, 
00:14:47.464 "data_size": 63488 00:14:47.464 } 00:14:47.464 ] 00:14:47.464 }' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:47.464 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=430 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.464 "name": "raid_bdev1", 00:14:47.464 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:47.464 "strip_size_kb": 0, 00:14:47.464 "state": "online", 00:14:47.464 "raid_level": "raid1", 00:14:47.464 "superblock": true, 00:14:47.464 "num_base_bdevs": 2, 00:14:47.464 "num_base_bdevs_discovered": 2, 00:14:47.464 "num_base_bdevs_operational": 2, 00:14:47.464 "process": { 00:14:47.464 "type": "rebuild", 00:14:47.464 "target": "spare", 00:14:47.464 "progress": { 00:14:47.464 "blocks": 22528, 00:14:47.464 "percent": 35 00:14:47.464 } 00:14:47.464 }, 00:14:47.464 "base_bdevs_list": [ 00:14:47.464 { 00:14:47.464 "name": "spare", 00:14:47.464 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:47.464 "is_configured": true, 00:14:47.464 "data_offset": 2048, 00:14:47.464 "data_size": 63488 00:14:47.464 }, 00:14:47.464 { 00:14:47.464 "name": "BaseBdev2", 00:14:47.464 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:47.464 "is_configured": true, 00:14:47.464 "data_offset": 2048, 00:14:47.464 "data_size": 63488 00:14:47.464 } 00:14:47.464 ] 00:14:47.464 }' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.464 16:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.837 "name": "raid_bdev1", 00:14:48.837 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:48.837 "strip_size_kb": 0, 00:14:48.837 "state": "online", 00:14:48.837 "raid_level": "raid1", 00:14:48.837 "superblock": true, 00:14:48.837 "num_base_bdevs": 2, 00:14:48.837 "num_base_bdevs_discovered": 2, 00:14:48.837 "num_base_bdevs_operational": 2, 00:14:48.837 "process": { 00:14:48.837 "type": "rebuild", 00:14:48.837 "target": "spare", 
00:14:48.837 "progress": { 00:14:48.837 "blocks": 47104, 00:14:48.837 "percent": 74 00:14:48.837 } 00:14:48.837 }, 00:14:48.837 "base_bdevs_list": [ 00:14:48.837 { 00:14:48.837 "name": "spare", 00:14:48.837 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:48.837 "is_configured": true, 00:14:48.837 "data_offset": 2048, 00:14:48.837 "data_size": 63488 00:14:48.837 }, 00:14:48.837 { 00:14:48.837 "name": "BaseBdev2", 00:14:48.837 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:48.837 "is_configured": true, 00:14:48.837 "data_offset": 2048, 00:14:48.837 "data_size": 63488 00:14:48.837 } 00:14:48.837 ] 00:14:48.837 }' 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.837 16:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.403 [2024-10-08 16:22:42.522017] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.403 [2024-10-08 16:22:42.522123] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.403 [2024-10-08 16:22:42.522275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.661 "name": "raid_bdev1", 00:14:49.661 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:49.661 "strip_size_kb": 0, 00:14:49.661 "state": "online", 00:14:49.661 "raid_level": "raid1", 00:14:49.661 "superblock": true, 00:14:49.661 "num_base_bdevs": 2, 00:14:49.661 "num_base_bdevs_discovered": 2, 00:14:49.661 "num_base_bdevs_operational": 2, 00:14:49.661 "base_bdevs_list": [ 00:14:49.661 { 00:14:49.661 "name": "spare", 00:14:49.661 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:49.661 "is_configured": true, 00:14:49.661 "data_offset": 2048, 00:14:49.661 "data_size": 63488 00:14:49.661 }, 00:14:49.661 { 00:14:49.661 "name": "BaseBdev2", 00:14:49.661 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:49.661 "is_configured": true, 00:14:49.661 "data_offset": 2048, 00:14:49.661 "data_size": 63488 00:14:49.661 } 00:14:49.661 ] 00:14:49.661 }' 00:14:49.661 16:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:49.919 
16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.919 "name": "raid_bdev1", 00:14:49.919 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:49.919 "strip_size_kb": 0, 00:14:49.919 "state": "online", 00:14:49.919 "raid_level": "raid1", 00:14:49.919 "superblock": true, 00:14:49.919 "num_base_bdevs": 2, 00:14:49.919 "num_base_bdevs_discovered": 2, 00:14:49.919 "num_base_bdevs_operational": 2, 00:14:49.919 "base_bdevs_list": [ 00:14:49.919 { 00:14:49.919 "name": "spare", 00:14:49.919 "uuid": 
"07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:49.919 "is_configured": true, 00:14:49.919 "data_offset": 2048, 00:14:49.919 "data_size": 63488 00:14:49.919 }, 00:14:49.919 { 00:14:49.919 "name": "BaseBdev2", 00:14:49.919 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:49.919 "is_configured": true, 00:14:49.919 "data_offset": 2048, 00:14:49.919 "data_size": 63488 00:14:49.919 } 00:14:49.919 ] 00:14:49.919 }' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.919 16:22:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.919 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.177 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.177 "name": "raid_bdev1", 00:14:50.177 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:50.177 "strip_size_kb": 0, 00:14:50.177 "state": "online", 00:14:50.177 "raid_level": "raid1", 00:14:50.177 "superblock": true, 00:14:50.177 "num_base_bdevs": 2, 00:14:50.177 "num_base_bdevs_discovered": 2, 00:14:50.177 "num_base_bdevs_operational": 2, 00:14:50.177 "base_bdevs_list": [ 00:14:50.177 { 00:14:50.177 "name": "spare", 00:14:50.177 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:50.177 "is_configured": true, 00:14:50.177 "data_offset": 2048, 00:14:50.177 "data_size": 63488 00:14:50.177 }, 00:14:50.177 { 00:14:50.177 "name": "BaseBdev2", 00:14:50.177 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:50.178 "is_configured": true, 00:14:50.178 "data_offset": 2048, 00:14:50.178 "data_size": 63488 00:14:50.178 } 00:14:50.178 ] 00:14:50.178 }' 00:14:50.178 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.178 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.436 [2024-10-08 16:22:43.744227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.436 [2024-10-08 16:22:43.744272] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.436 [2024-10-08 16:22:43.744383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.436 [2024-10-08 16:22:43.744505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.436 [2024-10-08 16:22:43.744547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.436 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.695 16:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:50.954 /dev/nbd0 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.954 1+0 records in 00:14:50.954 1+0 records out 00:14:50.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337479 s, 12.1 MB/s 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.954 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:51.213 /dev/nbd1 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:51.213 16:22:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.213 1+0 records in 00:14:51.213 1+0 records out 00:14:51.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315207 s, 13.0 MB/s 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:51.213 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.471 
16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.471 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.730 16:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.988 [2024-10-08 16:22:45.221548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.988 [2024-10-08 16:22:45.221619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.988 [2024-10-08 16:22:45.221655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:51.988 [2024-10-08 16:22:45.221671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.988 [2024-10-08 16:22:45.224607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.988 [2024-10-08 16:22:45.224655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.988 [2024-10-08 16:22:45.224781] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:51.988 [2024-10-08 
16:22:45.224864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.988 [2024-10-08 16:22:45.225069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.988 spare 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.988 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.246 [2024-10-08 16:22:45.325194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:52.246 [2024-10-08 16:22:45.325253] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:52.246 [2024-10-08 16:22:45.325692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:52.246 [2024-10-08 16:22:45.325953] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:52.246 [2024-10-08 16:22:45.325982] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:52.246 [2024-10-08 16:22:45.326233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.246 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.247 "name": "raid_bdev1", 00:14:52.247 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:52.247 "strip_size_kb": 0, 00:14:52.247 "state": "online", 00:14:52.247 "raid_level": "raid1", 00:14:52.247 "superblock": true, 00:14:52.247 "num_base_bdevs": 2, 00:14:52.247 "num_base_bdevs_discovered": 2, 00:14:52.247 "num_base_bdevs_operational": 2, 00:14:52.247 "base_bdevs_list": [ 00:14:52.247 { 00:14:52.247 "name": "spare", 00:14:52.247 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:52.247 "is_configured": true, 00:14:52.247 "data_offset": 2048, 00:14:52.247 "data_size": 63488 00:14:52.247 }, 00:14:52.247 { 00:14:52.247 "name": "BaseBdev2", 00:14:52.247 "uuid": 
"a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:52.247 "is_configured": true, 00:14:52.247 "data_offset": 2048, 00:14:52.247 "data_size": 63488 00:14:52.247 } 00:14:52.247 ] 00:14:52.247 }' 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.247 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.814 "name": "raid_bdev1", 00:14:52.814 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:52.814 "strip_size_kb": 0, 00:14:52.814 "state": "online", 00:14:52.814 "raid_level": "raid1", 00:14:52.814 "superblock": true, 00:14:52.814 "num_base_bdevs": 2, 00:14:52.814 "num_base_bdevs_discovered": 2, 00:14:52.814 "num_base_bdevs_operational": 2, 00:14:52.814 "base_bdevs_list": [ 00:14:52.814 { 
00:14:52.814 "name": "spare", 00:14:52.814 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:52.814 "is_configured": true, 00:14:52.814 "data_offset": 2048, 00:14:52.814 "data_size": 63488 00:14:52.814 }, 00:14:52.814 { 00:14:52.814 "name": "BaseBdev2", 00:14:52.814 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:52.814 "is_configured": true, 00:14:52.814 "data_offset": 2048, 00:14:52.814 "data_size": 63488 00:14:52.814 } 00:14:52.814 ] 00:14:52.814 }' 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.814 16:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.814 [2024-10-08 16:22:46.058399] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.814 "name": "raid_bdev1", 00:14:52.814 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:52.814 "strip_size_kb": 0, 00:14:52.814 
"state": "online", 00:14:52.814 "raid_level": "raid1", 00:14:52.814 "superblock": true, 00:14:52.814 "num_base_bdevs": 2, 00:14:52.814 "num_base_bdevs_discovered": 1, 00:14:52.814 "num_base_bdevs_operational": 1, 00:14:52.814 "base_bdevs_list": [ 00:14:52.814 { 00:14:52.814 "name": null, 00:14:52.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.814 "is_configured": false, 00:14:52.814 "data_offset": 0, 00:14:52.814 "data_size": 63488 00:14:52.814 }, 00:14:52.814 { 00:14:52.814 "name": "BaseBdev2", 00:14:52.814 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:52.814 "is_configured": true, 00:14:52.814 "data_offset": 2048, 00:14:52.814 "data_size": 63488 00:14:52.814 } 00:14:52.814 ] 00:14:52.814 }' 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.814 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.381 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.381 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.381 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.381 [2024-10-08 16:22:46.586554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.381 [2024-10-08 16:22:46.586797] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:53.381 [2024-10-08 16:22:46.586831] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:53.381 [2024-10-08 16:22:46.586881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.381 [2024-10-08 16:22:46.601216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:53.381 16:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.381 16:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:53.381 [2024-10-08 16:22:46.603733] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.314 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.573 "name": "raid_bdev1", 00:14:54.573 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:54.573 "strip_size_kb": 0, 00:14:54.573 "state": "online", 00:14:54.573 "raid_level": "raid1", 
00:14:54.573 "superblock": true, 00:14:54.573 "num_base_bdevs": 2, 00:14:54.573 "num_base_bdevs_discovered": 2, 00:14:54.573 "num_base_bdevs_operational": 2, 00:14:54.573 "process": { 00:14:54.573 "type": "rebuild", 00:14:54.573 "target": "spare", 00:14:54.573 "progress": { 00:14:54.573 "blocks": 20480, 00:14:54.573 "percent": 32 00:14:54.573 } 00:14:54.573 }, 00:14:54.573 "base_bdevs_list": [ 00:14:54.573 { 00:14:54.573 "name": "spare", 00:14:54.573 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:54.573 "is_configured": true, 00:14:54.573 "data_offset": 2048, 00:14:54.573 "data_size": 63488 00:14:54.573 }, 00:14:54.573 { 00:14:54.573 "name": "BaseBdev2", 00:14:54.573 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:54.573 "is_configured": true, 00:14:54.573 "data_offset": 2048, 00:14:54.573 "data_size": 63488 00:14:54.573 } 00:14:54.573 ] 00:14:54.573 }' 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 [2024-10-08 16:22:47.781147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.573 [2024-10-08 16:22:47.813026] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:54.573 [2024-10-08 16:22:47.813120] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:54.573 [2024-10-08 16:22:47.813145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.573 [2024-10-08 16:22:47.813162] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 16:22:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.831 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.831 "name": "raid_bdev1", 00:14:54.831 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:54.831 "strip_size_kb": 0, 00:14:54.831 "state": "online", 00:14:54.831 "raid_level": "raid1", 00:14:54.831 "superblock": true, 00:14:54.831 "num_base_bdevs": 2, 00:14:54.831 "num_base_bdevs_discovered": 1, 00:14:54.831 "num_base_bdevs_operational": 1, 00:14:54.831 "base_bdevs_list": [ 00:14:54.831 { 00:14:54.831 "name": null, 00:14:54.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.831 "is_configured": false, 00:14:54.831 "data_offset": 0, 00:14:54.831 "data_size": 63488 00:14:54.831 }, 00:14:54.831 { 00:14:54.831 "name": "BaseBdev2", 00:14:54.831 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:54.831 "is_configured": true, 00:14:54.831 "data_offset": 2048, 00:14:54.831 "data_size": 63488 00:14:54.831 } 00:14:54.831 ] 00:14:54.831 }' 00:14:54.831 16:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.831 16:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.091 16:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.091 16:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.091 16:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.091 [2024-10-08 16:22:48.355187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.091 [2024-10-08 16:22:48.355278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.091 [2024-10-08 16:22:48.355310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:55.091 [2024-10-08 16:22:48.355330] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.091 [2024-10-08 16:22:48.356003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.091 [2024-10-08 16:22:48.356056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.091 [2024-10-08 16:22:48.356176] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:55.091 [2024-10-08 16:22:48.356202] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:55.091 [2024-10-08 16:22:48.356217] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:55.091 [2024-10-08 16:22:48.356249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.091 [2024-10-08 16:22:48.370762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:55.091 spare 00:14:55.091 16:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.091 16:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:55.091 [2024-10-08 16:22:48.373280] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.462 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.462 "name": "raid_bdev1", 00:14:56.462 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:56.462 "strip_size_kb": 0, 00:14:56.462 "state": "online", 00:14:56.462 "raid_level": "raid1", 00:14:56.462 "superblock": true, 00:14:56.463 "num_base_bdevs": 2, 00:14:56.463 "num_base_bdevs_discovered": 2, 00:14:56.463 "num_base_bdevs_operational": 2, 00:14:56.463 "process": { 00:14:56.463 "type": "rebuild", 00:14:56.463 "target": "spare", 00:14:56.463 "progress": { 00:14:56.463 "blocks": 20480, 00:14:56.463 "percent": 32 00:14:56.463 } 00:14:56.463 }, 00:14:56.463 "base_bdevs_list": [ 00:14:56.463 { 00:14:56.463 "name": "spare", 00:14:56.463 "uuid": "07f467ff-50b3-55ad-a874-c2c741e19d7c", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 2048, 00:14:56.463 "data_size": 63488 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev2", 00:14:56.463 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 2048, 00:14:56.463 "data_size": 63488 00:14:56.463 } 00:14:56.463 ] 00:14:56.463 }' 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.463 
16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 [2024-10-08 16:22:49.527426] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.463 [2024-10-08 16:22:49.582758] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.463 [2024-10-08 16:22:49.582876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.463 [2024-10-08 16:22:49.582906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.463 [2024-10-08 16:22:49.582920] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.463 "name": "raid_bdev1", 00:14:56.463 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:56.463 "strip_size_kb": 0, 00:14:56.463 "state": "online", 00:14:56.463 "raid_level": "raid1", 00:14:56.463 "superblock": true, 00:14:56.463 "num_base_bdevs": 2, 00:14:56.463 "num_base_bdevs_discovered": 1, 00:14:56.463 "num_base_bdevs_operational": 1, 00:14:56.463 "base_bdevs_list": [ 00:14:56.463 { 00:14:56.463 "name": null, 00:14:56.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.463 "is_configured": false, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 63488 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev2", 00:14:56.463 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 2048, 00:14:56.463 "data_size": 63488 00:14:56.463 } 00:14:56.463 ] 00:14:56.463 }' 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.463 16:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 16:22:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.028 "name": "raid_bdev1", 00:14:57.028 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:57.028 "strip_size_kb": 0, 00:14:57.028 "state": "online", 00:14:57.028 "raid_level": "raid1", 00:14:57.028 "superblock": true, 00:14:57.028 "num_base_bdevs": 2, 00:14:57.028 "num_base_bdevs_discovered": 1, 00:14:57.028 "num_base_bdevs_operational": 1, 00:14:57.028 "base_bdevs_list": [ 00:14:57.028 { 00:14:57.028 "name": null, 00:14:57.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.028 "is_configured": false, 00:14:57.028 "data_offset": 0, 00:14:57.028 "data_size": 63488 00:14:57.028 }, 00:14:57.028 { 00:14:57.028 "name": "BaseBdev2", 00:14:57.028 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:57.028 "is_configured": true, 00:14:57.028 "data_offset": 2048, 00:14:57.028 "data_size": 
63488 00:14:57.028 } 00:14:57.028 ] 00:14:57.028 }' 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 [2024-10-08 16:22:50.328810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.028 [2024-10-08 16:22:50.328897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.028 [2024-10-08 16:22:50.328951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:57.028 [2024-10-08 16:22:50.328990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.028 [2024-10-08 16:22:50.329811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.028 [2024-10-08 16:22:50.329866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:57.028 [2024-10-08 16:22:50.330039] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:57.028 [2024-10-08 16:22:50.330080] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:57.028 [2024-10-08 16:22:50.330124] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:57.028 [2024-10-08 16:22:50.330157] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:57.028 BaseBdev1 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.028 16:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.402 "name": "raid_bdev1", 00:14:58.402 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:58.402 "strip_size_kb": 0, 00:14:58.402 "state": "online", 00:14:58.402 "raid_level": "raid1", 00:14:58.402 "superblock": true, 00:14:58.402 "num_base_bdevs": 2, 00:14:58.402 "num_base_bdevs_discovered": 1, 00:14:58.402 "num_base_bdevs_operational": 1, 00:14:58.402 "base_bdevs_list": [ 00:14:58.402 { 00:14:58.402 "name": null, 00:14:58.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.402 "is_configured": false, 00:14:58.402 "data_offset": 0, 00:14:58.402 "data_size": 63488 00:14:58.402 }, 00:14:58.402 { 00:14:58.402 "name": "BaseBdev2", 00:14:58.402 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:58.402 "is_configured": true, 00:14:58.402 "data_offset": 2048, 00:14:58.402 "data_size": 63488 00:14:58.402 } 00:14:58.402 ] 00:14:58.402 }' 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.402 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.660 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.660 "name": "raid_bdev1", 00:14:58.660 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:58.660 "strip_size_kb": 0, 00:14:58.660 "state": "online", 00:14:58.660 "raid_level": "raid1", 00:14:58.660 "superblock": true, 00:14:58.660 "num_base_bdevs": 2, 00:14:58.660 "num_base_bdevs_discovered": 1, 00:14:58.660 "num_base_bdevs_operational": 1, 00:14:58.660 "base_bdevs_list": [ 00:14:58.660 { 00:14:58.660 "name": null, 00:14:58.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.660 "is_configured": false, 00:14:58.660 "data_offset": 0, 00:14:58.660 "data_size": 63488 00:14:58.660 }, 00:14:58.660 { 00:14:58.660 "name": "BaseBdev2", 00:14:58.661 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:58.661 "is_configured": true, 00:14:58.661 "data_offset": 2048, 00:14:58.661 "data_size": 63488 00:14:58.661 } 00:14:58.661 ] 00:14:58.661 }' 00:14:58.661 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.661 16:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.959 16:22:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.959 [2024-10-08 16:22:52.045342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.959 [2024-10-08 16:22:52.045625] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:58.959 [2024-10-08 16:22:52.045672] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:58.959 request: 00:14:58.959 { 00:14:58.959 "base_bdev": "BaseBdev1", 00:14:58.959 "raid_bdev": "raid_bdev1", 00:14:58.959 "method": 
"bdev_raid_add_base_bdev", 00:14:58.959 "req_id": 1 00:14:58.959 } 00:14:58.959 Got JSON-RPC error response 00:14:58.959 response: 00:14:58.959 { 00:14:58.959 "code": -22, 00:14:58.959 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:58.959 } 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:58.959 16:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.893 16:22:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.893 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.893 "name": "raid_bdev1", 00:14:59.893 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:14:59.893 "strip_size_kb": 0, 00:14:59.893 "state": "online", 00:14:59.893 "raid_level": "raid1", 00:14:59.893 "superblock": true, 00:14:59.893 "num_base_bdevs": 2, 00:14:59.893 "num_base_bdevs_discovered": 1, 00:14:59.893 "num_base_bdevs_operational": 1, 00:14:59.893 "base_bdevs_list": [ 00:14:59.893 { 00:14:59.893 "name": null, 00:14:59.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.893 "is_configured": false, 00:14:59.894 "data_offset": 0, 00:14:59.894 "data_size": 63488 00:14:59.894 }, 00:14:59.894 { 00:14:59.894 "name": "BaseBdev2", 00:14:59.894 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:14:59.894 "is_configured": true, 00:14:59.894 "data_offset": 2048, 00:14:59.894 "data_size": 63488 00:14:59.894 } 00:14:59.894 ] 00:14:59.894 }' 00:14:59.894 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.894 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.460 "name": "raid_bdev1", 00:15:00.460 "uuid": "997c2b6d-122d-425c-ab00-402b20774e41", 00:15:00.460 "strip_size_kb": 0, 00:15:00.460 "state": "online", 00:15:00.460 "raid_level": "raid1", 00:15:00.460 "superblock": true, 00:15:00.460 "num_base_bdevs": 2, 00:15:00.460 "num_base_bdevs_discovered": 1, 00:15:00.460 "num_base_bdevs_operational": 1, 00:15:00.460 "base_bdevs_list": [ 00:15:00.460 { 00:15:00.460 "name": null, 00:15:00.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.460 "is_configured": false, 00:15:00.460 "data_offset": 0, 00:15:00.460 "data_size": 63488 00:15:00.460 }, 00:15:00.460 { 00:15:00.460 "name": "BaseBdev2", 00:15:00.460 "uuid": "a106da5d-dc8b-54df-86be-fa3db33e7f8b", 00:15:00.460 "is_configured": true, 00:15:00.460 "data_offset": 2048, 00:15:00.460 "data_size": 63488 00:15:00.460 } 00:15:00.460 ] 00:15:00.460 }' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76280 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76280 ']' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 76280 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76280 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76280' 00:15:00.460 killing process with pid 76280 00:15:00.460 16:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 76280 00:15:00.460 Received shutdown signal, test time was about 60.000000 seconds 00:15:00.460 00:15:00.460 Latency(us) 00:15:00.460 [2024-10-08T16:22:53.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.460 [2024-10-08T16:22:53.782Z] =================================================================================================================== 00:15:00.460 [2024-10-08T16:22:53.782Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.460 [2024-10-08 16:22:53.777560] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.460 16:22:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 76280 00:15:00.460 [2024-10-08 16:22:53.777860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.460 [2024-10-08 16:22:53.777979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.460 [2024-10-08 16:22:53.778030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:01.026 [2024-10-08 16:22:54.049724] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.961 16:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:01.961 00:15:01.961 real 0m27.328s 00:15:01.961 user 0m33.414s 00:15:01.961 sys 0m4.180s 00:15:01.961 16:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.961 16:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.961 ************************************ 00:15:01.961 END TEST raid_rebuild_test_sb 00:15:01.961 ************************************ 00:15:02.221 16:22:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:02.221 16:22:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:02.221 16:22:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.221 16:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.221 ************************************ 00:15:02.221 START TEST raid_rebuild_test_io 00:15:02.221 ************************************ 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:02.221 
16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77046 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77046 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 77046 ']' 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.221 16:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.221 [2024-10-08 16:22:55.418329] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:15:02.221 [2024-10-08 16:22:55.418503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77046 ] 00:15:02.221 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:02.221 Zero copy mechanism will not be used. 
00:15:02.480 [2024-10-08 16:22:55.587974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.738 [2024-10-08 16:22:55.847946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.738 [2024-10-08 16:22:56.050116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.738 [2024-10-08 16:22:56.050183] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 BaseBdev1_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 [2024-10-08 16:22:56.511691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.341 [2024-10-08 16:22:56.511772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.341 [2024-10-08 16:22:56.511806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.341 [2024-10-08 
16:22:56.511831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.341 [2024-10-08 16:22:56.514737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.341 [2024-10-08 16:22:56.514786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.341 BaseBdev1 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 BaseBdev2_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 [2024-10-08 16:22:56.580495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:03.341 [2024-10-08 16:22:56.580604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.341 [2024-10-08 16:22:56.580638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.341 [2024-10-08 16:22:56.580660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.341 [2024-10-08 16:22:56.583546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:03.341 [2024-10-08 16:22:56.583592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.341 BaseBdev2 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 spare_malloc 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 spare_delay 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 [2024-10-08 16:22:56.647379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.341 [2024-10-08 16:22:56.647462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.341 [2024-10-08 16:22:56.647493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:03.341 [2024-10-08 16:22:56.647513] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.341 [2024-10-08 16:22:56.650318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.341 [2024-10-08 16:22:56.650367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.341 spare 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 [2024-10-08 16:22:56.655433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.341 [2024-10-08 16:22:56.657932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.341 [2024-10-08 16:22:56.658060] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.341 [2024-10-08 16:22:56.658084] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.341 [2024-10-08 16:22:56.658431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:03.341 [2024-10-08 16:22:56.658674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.341 [2024-10-08 16:22:56.658699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.341 [2024-10-08 16:22:56.658897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.341 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.600 "name": "raid_bdev1", 00:15:03.600 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:03.600 "strip_size_kb": 0, 00:15:03.600 "state": "online", 00:15:03.600 "raid_level": "raid1", 00:15:03.600 "superblock": false, 00:15:03.600 "num_base_bdevs": 2, 00:15:03.600 
"num_base_bdevs_discovered": 2, 00:15:03.600 "num_base_bdevs_operational": 2, 00:15:03.600 "base_bdevs_list": [ 00:15:03.600 { 00:15:03.600 "name": "BaseBdev1", 00:15:03.600 "uuid": "c839542c-8bbf-5365-80c9-6bc649901092", 00:15:03.600 "is_configured": true, 00:15:03.600 "data_offset": 0, 00:15:03.600 "data_size": 65536 00:15:03.600 }, 00:15:03.600 { 00:15:03.600 "name": "BaseBdev2", 00:15:03.600 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:03.600 "is_configured": true, 00:15:03.600 "data_offset": 0, 00:15:03.600 "data_size": 65536 00:15:03.600 } 00:15:03.600 ] 00:15:03.600 }' 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.600 16:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.858 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:03.858 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.858 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.858 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.858 [2024-10-08 16:22:57.155942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.858 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.117 [2024-10-08 16:22:57.247582] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.117 "name": "raid_bdev1", 00:15:04.117 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:04.117 "strip_size_kb": 0, 00:15:04.117 "state": "online", 00:15:04.117 "raid_level": "raid1", 00:15:04.117 "superblock": false, 00:15:04.117 "num_base_bdevs": 2, 00:15:04.117 "num_base_bdevs_discovered": 1, 00:15:04.117 "num_base_bdevs_operational": 1, 00:15:04.117 "base_bdevs_list": [ 00:15:04.117 { 00:15:04.117 "name": null, 00:15:04.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.117 "is_configured": false, 00:15:04.117 "data_offset": 0, 00:15:04.117 "data_size": 65536 00:15:04.117 }, 00:15:04.117 { 00:15:04.117 "name": "BaseBdev2", 00:15:04.117 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:04.117 "is_configured": true, 00:15:04.117 "data_offset": 0, 00:15:04.117 "data_size": 65536 00:15:04.117 } 00:15:04.117 ] 00:15:04.117 }' 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.117 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.117 [2024-10-08 16:22:57.375803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:04.117 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:15:04.117 Zero copy mechanism will not be used. 00:15:04.117 Running I/O for 60 seconds... 00:15:04.684 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.684 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.684 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.684 [2024-10-08 16:22:57.785698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.684 16:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.684 16:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.684 [2024-10-08 16:22:57.832649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:04.684 [2024-10-08 16:22:57.835235] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.684 [2024-10-08 16:22:57.952924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:04.684 [2024-10-08 16:22:57.953624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:04.941 [2024-10-08 16:22:58.165453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:04.942 [2024-10-08 16:22:58.165876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:05.206 168.00 IOPS, 504.00 MiB/s [2024-10-08T16:22:58.528Z] [2024-10-08 16:22:58.397222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:05.206 [2024-10-08 16:22:58.397933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:05.465 [2024-10-08 16:22:58.630643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.723 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.723 "name": "raid_bdev1", 00:15:05.723 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:05.723 "strip_size_kb": 0, 00:15:05.723 "state": "online", 00:15:05.723 "raid_level": "raid1", 00:15:05.723 "superblock": false, 00:15:05.723 "num_base_bdevs": 2, 00:15:05.723 "num_base_bdevs_discovered": 2, 00:15:05.723 "num_base_bdevs_operational": 2, 00:15:05.723 "process": { 00:15:05.723 "type": "rebuild", 00:15:05.723 "target": "spare", 00:15:05.723 "progress": { 00:15:05.723 "blocks": 10240, 00:15:05.723 "percent": 15 00:15:05.723 } 00:15:05.723 }, 
00:15:05.723 "base_bdevs_list": [ 00:15:05.723 { 00:15:05.723 "name": "spare", 00:15:05.724 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:05.724 "is_configured": true, 00:15:05.724 "data_offset": 0, 00:15:05.724 "data_size": 65536 00:15:05.724 }, 00:15:05.724 { 00:15:05.724 "name": "BaseBdev2", 00:15:05.724 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:05.724 "is_configured": true, 00:15:05.724 "data_offset": 0, 00:15:05.724 "data_size": 65536 00:15:05.724 } 00:15:05.724 ] 00:15:05.724 }' 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.724 [2024-10-08 16:22:58.969407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.724 16:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.724 [2024-10-08 16:22:58.981579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.983 [2024-10-08 16:22:59.090352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.983 [2024-10-08 16:22:59.090806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.983 [2024-10-08 16:22:59.092612] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:15:05.983 [2024-10-08 16:22:59.111197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.983 [2024-10-08 16:22:59.111290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.983 [2024-10-08 16:22:59.111318] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.983 [2024-10-08 16:22:59.152169] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 
16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.983 "name": "raid_bdev1", 00:15:05.983 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:05.983 "strip_size_kb": 0, 00:15:05.983 "state": "online", 00:15:05.983 "raid_level": "raid1", 00:15:05.983 "superblock": false, 00:15:05.983 "num_base_bdevs": 2, 00:15:05.983 "num_base_bdevs_discovered": 1, 00:15:05.983 "num_base_bdevs_operational": 1, 00:15:05.983 "base_bdevs_list": [ 00:15:05.983 { 00:15:05.983 "name": null, 00:15:05.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.983 "is_configured": false, 00:15:05.983 "data_offset": 0, 00:15:05.983 "data_size": 65536 00:15:05.983 }, 00:15:05.983 { 00:15:05.983 "name": "BaseBdev2", 00:15:05.983 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:05.983 "is_configured": true, 00:15:05.983 "data_offset": 0, 00:15:05.983 "data_size": 65536 00:15:05.983 } 00:15:05.983 ] 00:15:05.983 }' 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.983 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.501 145.50 IOPS, 436.50 MiB/s [2024-10-08T16:22:59.823Z] 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.501 
16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.501 "name": "raid_bdev1", 00:15:06.501 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:06.501 "strip_size_kb": 0, 00:15:06.501 "state": "online", 00:15:06.501 "raid_level": "raid1", 00:15:06.501 "superblock": false, 00:15:06.501 "num_base_bdevs": 2, 00:15:06.501 "num_base_bdevs_discovered": 1, 00:15:06.501 "num_base_bdevs_operational": 1, 00:15:06.501 "base_bdevs_list": [ 00:15:06.501 { 00:15:06.501 "name": null, 00:15:06.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.501 "is_configured": false, 00:15:06.501 "data_offset": 0, 00:15:06.501 "data_size": 65536 00:15:06.501 }, 00:15:06.501 { 00:15:06.501 "name": "BaseBdev2", 00:15:06.501 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:06.501 "is_configured": true, 00:15:06.501 "data_offset": 0, 00:15:06.501 "data_size": 65536 00:15:06.501 } 00:15:06.501 ] 00:15:06.501 }' 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.501 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.760 [2024-10-08 16:22:59.827579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.760 16:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.760 16:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.760 [2024-10-08 16:22:59.905170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:06.760 [2024-10-08 16:22:59.907726] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.760 [2024-10-08 16:23:00.017288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.760 [2024-10-08 16:23:00.018009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:07.019 [2024-10-08 16:23:00.137490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:07.019 [2024-10-08 16:23:00.137893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:07.277 156.33 IOPS, 469.00 MiB/s [2024-10-08T16:23:00.599Z] [2024-10-08 16:23:00.418658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:07.277 [2024-10-08 16:23:00.535443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:07.843 
[2024-10-08 16:23:00.875249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:07.843 [2024-10-08 16:23:00.875960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.843 "name": "raid_bdev1", 00:15:07.843 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:07.843 "strip_size_kb": 0, 00:15:07.843 "state": "online", 00:15:07.843 "raid_level": "raid1", 00:15:07.843 "superblock": false, 00:15:07.843 "num_base_bdevs": 2, 00:15:07.843 "num_base_bdevs_discovered": 2, 00:15:07.843 "num_base_bdevs_operational": 2, 00:15:07.843 "process": { 00:15:07.843 "type": "rebuild", 00:15:07.843 "target": "spare", 00:15:07.843 "progress": 
{ 00:15:07.843 "blocks": 14336, 00:15:07.843 "percent": 21 00:15:07.843 } 00:15:07.843 }, 00:15:07.843 "base_bdevs_list": [ 00:15:07.843 { 00:15:07.843 "name": "spare", 00:15:07.843 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:07.843 "is_configured": true, 00:15:07.843 "data_offset": 0, 00:15:07.843 "data_size": 65536 00:15:07.843 }, 00:15:07.843 { 00:15:07.843 "name": "BaseBdev2", 00:15:07.843 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:07.843 "is_configured": true, 00:15:07.843 "data_offset": 0, 00:15:07.843 "data_size": 65536 00:15:07.843 } 00:15:07.843 ] 00:15:07.843 }' 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.843 16:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=451 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.843 [2024-10-08 16:23:01.080204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:07.843 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.843 "name": "raid_bdev1", 00:15:07.843 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:07.843 "strip_size_kb": 0, 00:15:07.843 "state": "online", 00:15:07.843 "raid_level": "raid1", 00:15:07.843 "superblock": false, 00:15:07.843 "num_base_bdevs": 2, 00:15:07.843 "num_base_bdevs_discovered": 2, 00:15:07.843 "num_base_bdevs_operational": 2, 00:15:07.843 "process": { 00:15:07.843 "type": "rebuild", 00:15:07.843 "target": "spare", 00:15:07.843 "progress": { 00:15:07.843 "blocks": 14336, 00:15:07.843 "percent": 21 00:15:07.843 } 00:15:07.843 }, 00:15:07.843 "base_bdevs_list": [ 00:15:07.843 { 00:15:07.843 "name": "spare", 00:15:07.843 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:07.844 "is_configured": true, 00:15:07.844 "data_offset": 0, 00:15:07.844 "data_size": 65536 00:15:07.844 }, 00:15:07.844 { 00:15:07.844 "name": "BaseBdev2", 00:15:07.844 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 
00:15:07.844 "is_configured": true, 00:15:07.844 "data_offset": 0, 00:15:07.844 "data_size": 65536 00:15:07.844 } 00:15:07.844 ] 00:15:07.844 }' 00:15:07.844 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.844 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.844 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.108 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.108 16:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.108 [2024-10-08 16:23:01.310374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:08.674 138.75 IOPS, 416.25 MiB/s [2024-10-08T16:23:01.996Z] [2024-10-08 16:23:01.740488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:08.674 [2024-10-08 16:23:01.969798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.932 "name": "raid_bdev1", 00:15:08.932 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:08.932 "strip_size_kb": 0, 00:15:08.932 "state": "online", 00:15:08.932 "raid_level": "raid1", 00:15:08.932 "superblock": false, 00:15:08.932 "num_base_bdevs": 2, 00:15:08.932 "num_base_bdevs_discovered": 2, 00:15:08.932 "num_base_bdevs_operational": 2, 00:15:08.932 "process": { 00:15:08.932 "type": "rebuild", 00:15:08.932 "target": "spare", 00:15:08.932 "progress": { 00:15:08.932 "blocks": 34816, 00:15:08.932 "percent": 53 00:15:08.932 } 00:15:08.932 }, 00:15:08.932 "base_bdevs_list": [ 00:15:08.932 { 00:15:08.932 "name": "spare", 00:15:08.932 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:08.932 "is_configured": true, 00:15:08.932 "data_offset": 0, 00:15:08.932 "data_size": 65536 00:15:08.932 }, 00:15:08.932 { 00:15:08.932 "name": "BaseBdev2", 00:15:08.932 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:08.932 "is_configured": true, 00:15:08.932 "data_offset": 0, 00:15:08.932 "data_size": 65536 00:15:08.932 } 00:15:08.932 ] 00:15:08.932 }' 00:15:08.932 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.190 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.190 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.190 16:23:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.190 16:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.190 123.40 IOPS, 370.20 MiB/s [2024-10-08T16:23:02.512Z] [2024-10-08 16:23:02.421400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.124 108.67 IOPS, 326.00 MiB/s [2024-10-08T16:23:03.446Z] [2024-10-08 16:23:03.395824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.124 "name": "raid_bdev1", 00:15:10.124 "uuid": 
"7440a7a2-0598-43fc-b467-47adbc225321", 00:15:10.124 "strip_size_kb": 0, 00:15:10.124 "state": "online", 00:15:10.124 "raid_level": "raid1", 00:15:10.124 "superblock": false, 00:15:10.124 "num_base_bdevs": 2, 00:15:10.124 "num_base_bdevs_discovered": 2, 00:15:10.124 "num_base_bdevs_operational": 2, 00:15:10.124 "process": { 00:15:10.124 "type": "rebuild", 00:15:10.124 "target": "spare", 00:15:10.124 "progress": { 00:15:10.124 "blocks": 55296, 00:15:10.124 "percent": 84 00:15:10.124 } 00:15:10.124 }, 00:15:10.124 "base_bdevs_list": [ 00:15:10.124 { 00:15:10.124 "name": "spare", 00:15:10.124 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:10.124 "is_configured": true, 00:15:10.124 "data_offset": 0, 00:15:10.124 "data_size": 65536 00:15:10.124 }, 00:15:10.124 { 00:15:10.124 "name": "BaseBdev2", 00:15:10.124 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:10.124 "is_configured": true, 00:15:10.124 "data_offset": 0, 00:15:10.124 "data_size": 65536 00:15:10.124 } 00:15:10.124 ] 00:15:10.124 }' 00:15:10.124 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.384 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.384 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.384 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.384 16:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.652 [2024-10-08 16:23:03.846394] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.652 [2024-10-08 16:23:03.946454] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.652 [2024-10-08 16:23:03.957284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.216 98.71 IOPS, 296.14 MiB/s 
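The verification loop in the trace above repeatedly fetches `raid_bdev1` info via `rpc_cmd bdev_raid_get_bdevs all` and applies the jq filters `.process.type // "none"` and `.process.target // "none"` to decide whether the rebuild is still running. A minimal Python sketch of that check, with the RPC response stubbed from the JSON shown in the log (the stubbed dict and variable names here are illustrative, not part of the SPDK test suite):

```python
import json

# Stub of the bdev_raid_get_bdevs RPC response for raid_bdev1, abridged
# from the JSON captured in the trace above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 55296, "percent": 84 }
  }
}
""")

# Equivalent of jq's '.process.type // "none"' and '.process.target // "none"':
# once the rebuild finishes, the "process" object disappears from the RPC
# output and both checks fall back to "none", which is what ends the loop
# (the [[ none == \r\e\b\u\i\l\d ]] comparison failing in the log).
process = raid_bdev_info.get("process") or {}
process_type = process.get("type", "none")
process_target = process.get("target", "none")

print(process_type, process_target)  # rebuild spare
```

This mirrors why the log shows `[[ rebuild == \r\e\b\u\i\l\d ]]` while the rebuild is in progress and `[[ none == \r\e\b\u\i\l\d ]]` after `raid_bdev_process_finish_done` fires.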
[2024-10-08T16:23:04.538Z] 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.216 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.473 "name": "raid_bdev1", 00:15:11.473 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:11.473 "strip_size_kb": 0, 00:15:11.473 "state": "online", 00:15:11.473 "raid_level": "raid1", 00:15:11.473 "superblock": false, 00:15:11.473 "num_base_bdevs": 2, 00:15:11.473 "num_base_bdevs_discovered": 2, 00:15:11.473 "num_base_bdevs_operational": 2, 00:15:11.473 "base_bdevs_list": [ 00:15:11.473 { 00:15:11.473 "name": "spare", 00:15:11.473 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:11.473 "is_configured": true, 00:15:11.473 "data_offset": 0, 00:15:11.473 "data_size": 65536 00:15:11.473 }, 00:15:11.473 { 00:15:11.473 "name": "BaseBdev2", 
00:15:11.473 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:11.473 "is_configured": true, 00:15:11.473 "data_offset": 0, 00:15:11.473 "data_size": 65536 00:15:11.473 } 00:15:11.473 ] 00:15:11.473 }' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.473 "name": "raid_bdev1", 
00:15:11.473 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:11.473 "strip_size_kb": 0, 00:15:11.473 "state": "online", 00:15:11.473 "raid_level": "raid1", 00:15:11.473 "superblock": false, 00:15:11.473 "num_base_bdevs": 2, 00:15:11.473 "num_base_bdevs_discovered": 2, 00:15:11.473 "num_base_bdevs_operational": 2, 00:15:11.473 "base_bdevs_list": [ 00:15:11.473 { 00:15:11.473 "name": "spare", 00:15:11.473 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:11.473 "is_configured": true, 00:15:11.473 "data_offset": 0, 00:15:11.473 "data_size": 65536 00:15:11.473 }, 00:15:11.473 { 00:15:11.473 "name": "BaseBdev2", 00:15:11.473 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:11.473 "is_configured": true, 00:15:11.473 "data_offset": 0, 00:15:11.473 "data_size": 65536 00:15:11.473 } 00:15:11.473 ] 00:15:11.473 }' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.473 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.732 16:23:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.732 "name": "raid_bdev1", 00:15:11.732 "uuid": "7440a7a2-0598-43fc-b467-47adbc225321", 00:15:11.732 "strip_size_kb": 0, 00:15:11.732 "state": "online", 00:15:11.732 "raid_level": "raid1", 00:15:11.732 "superblock": false, 00:15:11.732 "num_base_bdevs": 2, 00:15:11.732 "num_base_bdevs_discovered": 2, 00:15:11.732 "num_base_bdevs_operational": 2, 00:15:11.732 "base_bdevs_list": [ 00:15:11.732 { 00:15:11.732 "name": "spare", 00:15:11.732 "uuid": "99623d40-4dad-5a10-906c-75cd8b5cc6f5", 00:15:11.732 "is_configured": true, 00:15:11.732 "data_offset": 0, 00:15:11.732 "data_size": 65536 00:15:11.732 }, 00:15:11.732 { 00:15:11.732 "name": "BaseBdev2", 00:15:11.732 "uuid": "e4077a6c-8c2c-5857-8604-36a5a8a64267", 00:15:11.732 "is_configured": true, 00:15:11.732 "data_offset": 0, 00:15:11.732 "data_size": 65536 00:15:11.732 } 00:15:11.732 ] 00:15:11.732 }' 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:11.732 16:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.299 [2024-10-08 16:23:05.376110] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.299 [2024-10-08 16:23:05.376152] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.299 91.00 IOPS, 273.00 MiB/s 00:15:12.299 Latency(us) 00:15:12.299 [2024-10-08T16:23:05.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.299 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:12.299 raid_bdev1 : 8.02 91.00 273.01 0.00 0.00 14505.44 297.89 111530.36 00:15:12.299 [2024-10-08T16:23:05.621Z] =================================================================================================================== 00:15:12.299 [2024-10-08T16:23:05.621Z] Total : 91.00 273.01 0.00 0.00 14505.44 297.89 111530.36 00:15:12.299 [2024-10-08 16:23:05.419910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.299 [2024-10-08 16:23:05.419977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.299 [2024-10-08 16:23:05.420097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.299 [2024-10-08 16:23:05.420116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:12.299 { 00:15:12.299 "results": [ 00:15:12.299 { 00:15:12.299 "job": "raid_bdev1", 00:15:12.299 "core_mask": "0x1", 00:15:12.299 "workload": "randrw", 
00:15:12.299 "percentage": 50, 00:15:12.299 "status": "finished", 00:15:12.299 "queue_depth": 2, 00:15:12.299 "io_size": 3145728, 00:15:12.299 "runtime": 8.021544, 00:15:12.299 "iops": 91.00492374036719, 00:15:12.299 "mibps": 273.01477122110157, 00:15:12.299 "io_failed": 0, 00:15:12.299 "io_timeout": 0, 00:15:12.299 "avg_latency_us": 14505.436931506849, 00:15:12.299 "min_latency_us": 297.8909090909091, 00:15:12.299 "max_latency_us": 111530.35636363637 00:15:12.299 } 00:15:12.299 ], 00:15:12.299 "core_count": 1 00:15:12.299 } 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.299 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:12.557 /dev/nbd0 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:12.557 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.558 1+0 records in 00:15:12.558 1+0 records out 00:15:12.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390391 s, 10.5 MB/s 00:15:12.558 16:23:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.558 16:23:05 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:12.816 /dev/nbd1 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.074 1+0 records in 00:15:13.074 1+0 records out 00:15:13.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031876 s, 12.8 MB/s 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- 
# '[' 4096 '!=' 0 ']' 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:13.074 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.075 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.641 16:23:06 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.641 16:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77046 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 77046 
']' 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 77046 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77046 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.899 killing process with pid 77046 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77046' 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 77046 00:15:13.899 Received shutdown signal, test time was about 9.721461 seconds 00:15:13.899 00:15:13.899 Latency(us) 00:15:13.899 [2024-10-08T16:23:07.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.899 [2024-10-08T16:23:07.221Z] =================================================================================================================== 00:15:13.899 [2024-10-08T16:23:07.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.899 [2024-10-08 16:23:07.099875] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.899 16:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 77046 00:15:14.157 [2024-10-08 16:23:07.305685] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:15.526 00:15:15.526 real 0m13.252s 00:15:15.526 user 0m17.348s 00:15:15.526 sys 0m1.487s 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:15:15.526 ************************************ 00:15:15.526 END TEST raid_rebuild_test_io 00:15:15.526 ************************************ 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.526 16:23:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:15.526 16:23:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:15.526 16:23:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.526 16:23:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.526 ************************************ 00:15:15.526 START TEST raid_rebuild_test_sb_io 00:15:15.526 ************************************ 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.526 
16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:15.526 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77428 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77428 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77428 ']' 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:15.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.527 16:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.527 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.527 Zero copy mechanism will not be used. 00:15:15.527 [2024-10-08 16:23:08.744467] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:15:15.527 [2024-10-08 16:23:08.744670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77428 ] 00:15:15.785 [2024-10-08 16:23:08.930085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.043 [2024-10-08 16:23:09.287597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.300 [2024-10-08 16:23:09.505060] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.300 [2024-10-08 16:23:09.505115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 BaseBdev1_malloc 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 [2024-10-08 16:23:09.806636] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.558 [2024-10-08 16:23:09.806726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.558 [2024-10-08 16:23:09.806764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.558 [2024-10-08 16:23:09.806789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.558 [2024-10-08 16:23:09.809766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.558 [2024-10-08 16:23:09.809816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.558 BaseBdev1 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 BaseBdev2_malloc 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 [2024-10-08 16:23:09.871294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:16.558 [2024-10-08 16:23:09.871399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:16.558 [2024-10-08 16:23:09.871432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.558 [2024-10-08 16:23:09.871451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.558 [2024-10-08 16:23:09.874496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.558 [2024-10-08 16:23:09.874558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.558 BaseBdev2 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.558 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 spare_malloc 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 spare_delay 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 
[2024-10-08 16:23:09.929262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.817 [2024-10-08 16:23:09.929341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.817 [2024-10-08 16:23:09.929374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:16.817 [2024-10-08 16:23:09.929393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.817 [2024-10-08 16:23:09.932477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.817 [2024-10-08 16:23:09.932541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.817 spare 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 [2024-10-08 16:23:09.937475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.817 [2024-10-08 16:23:09.939985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.817 [2024-10-08 16:23:09.940222] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.817 [2024-10-08 16:23:09.940245] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:16.817 [2024-10-08 16:23:09.940641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:16.817 [2024-10-08 16:23:09.940887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.817 [2024-10-08 
16:23:09.940913] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.817 [2024-10-08 16:23:09.941131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.817 "name": "raid_bdev1", 00:15:16.817 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:16.817 "strip_size_kb": 0, 00:15:16.817 "state": "online", 00:15:16.817 "raid_level": "raid1", 00:15:16.817 "superblock": true, 00:15:16.817 "num_base_bdevs": 2, 00:15:16.817 "num_base_bdevs_discovered": 2, 00:15:16.817 "num_base_bdevs_operational": 2, 00:15:16.817 "base_bdevs_list": [ 00:15:16.817 { 00:15:16.817 "name": "BaseBdev1", 00:15:16.817 "uuid": "14463110-7f35-5396-a5e6-9867e8727d2d", 00:15:16.817 "is_configured": true, 00:15:16.817 "data_offset": 2048, 00:15:16.817 "data_size": 63488 00:15:16.817 }, 00:15:16.817 { 00:15:16.817 "name": "BaseBdev2", 00:15:16.817 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:16.817 "is_configured": true, 00:15:16.817 "data_offset": 2048, 00:15:16.817 "data_size": 63488 00:15:16.817 } 00:15:16.817 ] 00:15:16.817 }' 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.817 16:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 [2024-10-08 16:23:10.505986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 [2024-10-08 16:23:10.601602] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.414 "name": "raid_bdev1", 00:15:17.414 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:17.414 "strip_size_kb": 0, 00:15:17.414 "state": "online", 00:15:17.414 "raid_level": "raid1", 00:15:17.414 "superblock": true, 00:15:17.414 "num_base_bdevs": 2, 00:15:17.414 "num_base_bdevs_discovered": 1, 00:15:17.414 "num_base_bdevs_operational": 1, 00:15:17.414 "base_bdevs_list": [ 00:15:17.414 { 00:15:17.414 "name": null, 00:15:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.414 "is_configured": false, 00:15:17.414 "data_offset": 0, 00:15:17.414 "data_size": 63488 00:15:17.414 }, 00:15:17.414 { 00:15:17.414 "name": "BaseBdev2", 00:15:17.414 "uuid": 
"7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:17.414 "is_configured": true, 00:15:17.414 "data_offset": 2048, 00:15:17.414 "data_size": 63488 00:15:17.414 } 00:15:17.414 ] 00:15:17.414 }' 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.414 16:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.414 [2024-10-08 16:23:10.721964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:17.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:17.414 Zero copy mechanism will not be used. 00:15:17.414 Running I/O for 60 seconds... 00:15:17.980 16:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.980 16:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.980 16:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.980 [2024-10-08 16:23:11.154721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.980 16:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.980 16:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:17.980 [2024-10-08 16:23:11.239866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:17.980 [2024-10-08 16:23:11.242491] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.236 [2024-10-08 16:23:11.391749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:18.496 [2024-10-08 16:23:11.634818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.496 [2024-10-08 16:23:11.635240] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.754 174.00 IOPS, 522.00 MiB/s [2024-10-08T16:23:12.077Z] [2024-10-08 16:23:11.974644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:18.755 [2024-10-08 16:23:11.975351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:19.012 [2024-10-08 16:23:12.195103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:19.012 [2024-10-08 16:23:12.195569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:19.012 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.012 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.013 16:23:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.013 "name": "raid_bdev1", 00:15:19.013 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:19.013 "strip_size_kb": 0, 00:15:19.013 "state": "online", 00:15:19.013 "raid_level": "raid1", 00:15:19.013 "superblock": true, 00:15:19.013 "num_base_bdevs": 2, 00:15:19.013 "num_base_bdevs_discovered": 2, 00:15:19.013 "num_base_bdevs_operational": 2, 00:15:19.013 "process": { 00:15:19.013 "type": "rebuild", 00:15:19.013 "target": "spare", 00:15:19.013 "progress": { 00:15:19.013 "blocks": 10240, 00:15:19.013 "percent": 16 00:15:19.013 } 00:15:19.013 }, 00:15:19.013 "base_bdevs_list": [ 00:15:19.013 { 00:15:19.013 "name": "spare", 00:15:19.013 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:19.013 "is_configured": true, 00:15:19.013 "data_offset": 2048, 00:15:19.013 "data_size": 63488 00:15:19.013 }, 00:15:19.013 { 00:15:19.013 "name": "BaseBdev2", 00:15:19.013 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:19.013 "is_configured": true, 00:15:19.013 "data_offset": 2048, 00:15:19.013 "data_size": 63488 00:15:19.013 } 00:15:19.013 ] 00:15:19.013 }' 00:15:19.013 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.272 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.272 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.272 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.273 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.273 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.273 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.273 [2024-10-08 
16:23:12.385374] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.273 [2024-10-08 16:23:12.420172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:19.273 [2024-10-08 16:23:12.528247] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.273 [2024-10-08 16:23:12.530936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.273 [2024-10-08 16:23:12.530977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.273 [2024-10-08 16:23:12.530997] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.273 [2024-10-08 16:23:12.578230] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.531 "name": "raid_bdev1", 00:15:19.531 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:19.531 "strip_size_kb": 0, 00:15:19.531 "state": "online", 00:15:19.531 "raid_level": "raid1", 00:15:19.531 "superblock": true, 00:15:19.531 "num_base_bdevs": 2, 00:15:19.531 "num_base_bdevs_discovered": 1, 00:15:19.531 "num_base_bdevs_operational": 1, 00:15:19.531 "base_bdevs_list": [ 00:15:19.531 { 00:15:19.531 "name": null, 00:15:19.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.531 "is_configured": false, 00:15:19.531 "data_offset": 0, 00:15:19.531 "data_size": 63488 00:15:19.531 }, 00:15:19.531 { 00:15:19.531 "name": "BaseBdev2", 00:15:19.531 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:19.531 "is_configured": true, 00:15:19.531 "data_offset": 2048, 00:15:19.531 "data_size": 63488 00:15:19.531 } 00:15:19.531 ] 00:15:19.531 }' 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.531 16:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.101 131.00 IOPS, 393.00 MiB/s [2024-10-08T16:23:13.423Z] 16:23:13 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.101 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.101 "name": "raid_bdev1", 00:15:20.101 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:20.101 "strip_size_kb": 0, 00:15:20.101 "state": "online", 00:15:20.101 "raid_level": "raid1", 00:15:20.101 "superblock": true, 00:15:20.101 "num_base_bdevs": 2, 00:15:20.101 "num_base_bdevs_discovered": 1, 00:15:20.101 "num_base_bdevs_operational": 1, 00:15:20.101 "base_bdevs_list": [ 00:15:20.101 { 00:15:20.101 "name": null, 00:15:20.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.101 "is_configured": false, 00:15:20.101 "data_offset": 0, 00:15:20.101 "data_size": 63488 00:15:20.101 }, 00:15:20.101 { 00:15:20.101 "name": "BaseBdev2", 00:15:20.101 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:20.102 "is_configured": true, 00:15:20.102 "data_offset": 2048, 00:15:20.102 "data_size": 
63488 00:15:20.102 } 00:15:20.102 ] 00:15:20.102 }' 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.102 [2024-10-08 16:23:13.301397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.102 16:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:20.102 [2024-10-08 16:23:13.376809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:20.102 [2024-10-08 16:23:13.379348] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.360 [2024-10-08 16:23:13.505518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:20.618 [2024-10-08 16:23:13.716271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:20.618 [2024-10-08 16:23:13.716728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:20.876 150.67 IOPS, 452.00 MiB/s [2024-10-08T16:23:14.198Z] [2024-10-08 
16:23:14.057940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:20.876 [2024-10-08 16:23:14.058704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.135 "name": "raid_bdev1", 00:15:21.135 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:21.135 "strip_size_kb": 0, 00:15:21.135 "state": "online", 00:15:21.135 "raid_level": "raid1", 00:15:21.135 "superblock": true, 00:15:21.135 "num_base_bdevs": 2, 00:15:21.135 "num_base_bdevs_discovered": 2, 00:15:21.135 "num_base_bdevs_operational": 2, 00:15:21.135 "process": { 00:15:21.135 "type": "rebuild", 00:15:21.135 "target": "spare", 
00:15:21.135 "progress": { 00:15:21.135 "blocks": 10240, 00:15:21.135 "percent": 16 00:15:21.135 } 00:15:21.135 }, 00:15:21.135 "base_bdevs_list": [ 00:15:21.135 { 00:15:21.135 "name": "spare", 00:15:21.135 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:21.135 "is_configured": true, 00:15:21.135 "data_offset": 2048, 00:15:21.135 "data_size": 63488 00:15:21.135 }, 00:15:21.135 { 00:15:21.135 "name": "BaseBdev2", 00:15:21.135 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:21.135 "is_configured": true, 00:15:21.135 "data_offset": 2048, 00:15:21.135 "data_size": 63488 00:15:21.135 } 00:15:21.135 ] 00:15:21.135 }' 00:15:21.135 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.393 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.393 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:21.394 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=464 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.394 16:23:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.394 [2024-10-08 16:23:14.553287] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.394 "name": "raid_bdev1", 00:15:21.394 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:21.394 "strip_size_kb": 0, 00:15:21.394 "state": "online", 00:15:21.394 "raid_level": "raid1", 00:15:21.394 "superblock": true, 00:15:21.394 "num_base_bdevs": 2, 00:15:21.394 "num_base_bdevs_discovered": 2, 00:15:21.394 "num_base_bdevs_operational": 2, 00:15:21.394 "process": { 00:15:21.394 "type": "rebuild", 00:15:21.394 "target": "spare", 00:15:21.394 "progress": { 00:15:21.394 "blocks": 12288, 00:15:21.394 "percent": 19 00:15:21.394 } 00:15:21.394 }, 00:15:21.394 "base_bdevs_list": [ 00:15:21.394 { 
00:15:21.394 "name": "spare", 00:15:21.394 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:21.394 "is_configured": true, 00:15:21.394 "data_offset": 2048, 00:15:21.394 "data_size": 63488 00:15:21.394 }, 00:15:21.394 { 00:15:21.394 "name": "BaseBdev2", 00:15:21.394 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:21.394 "is_configured": true, 00:15:21.394 "data_offset": 2048, 00:15:21.394 "data_size": 63488 00:15:21.394 } 00:15:21.394 ] 00:15:21.394 }' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.394 16:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.394 [2024-10-08 16:23:14.707983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:21.911 141.50 IOPS, 424.50 MiB/s [2024-10-08T16:23:15.233Z] [2024-10-08 16:23:15.032720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:22.170 [2024-10-08 16:23:15.263019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:22.170 [2024-10-08 16:23:15.389629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:22.170 [2024-10-08 16:23:15.389899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS 
< timeout )) 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.428 "name": "raid_bdev1", 00:15:22.428 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:22.428 "strip_size_kb": 0, 00:15:22.428 "state": "online", 00:15:22.428 "raid_level": "raid1", 00:15:22.428 "superblock": true, 00:15:22.428 "num_base_bdevs": 2, 00:15:22.428 "num_base_bdevs_discovered": 2, 00:15:22.428 "num_base_bdevs_operational": 2, 00:15:22.428 "process": { 00:15:22.428 "type": "rebuild", 00:15:22.428 "target": "spare", 00:15:22.428 "progress": { 00:15:22.428 "blocks": 32768, 00:15:22.428 "percent": 51 00:15:22.428 } 00:15:22.428 }, 00:15:22.428 "base_bdevs_list": [ 00:15:22.428 { 00:15:22.428 "name": "spare", 00:15:22.428 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:22.428 "is_configured": true, 
00:15:22.428 "data_offset": 2048, 00:15:22.428 "data_size": 63488 00:15:22.428 }, 00:15:22.428 { 00:15:22.428 "name": "BaseBdev2", 00:15:22.428 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:22.428 "is_configured": true, 00:15:22.428 "data_offset": 2048, 00:15:22.428 "data_size": 63488 00:15:22.428 } 00:15:22.428 ] 00:15:22.428 }' 00:15:22.428 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.687 122.40 IOPS, 367.20 MiB/s [2024-10-08T16:23:16.009Z] 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.687 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.687 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.687 16:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.687 [2024-10-08 16:23:15.971312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:23.288 [2024-10-08 16:23:16.584846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:23.547 108.67 IOPS, 326.00 MiB/s [2024-10-08T16:23:16.869Z] 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 
-- # local raid_bdev_info 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.547 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.020 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.020 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.020 "name": "raid_bdev1", 00:15:24.020 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:24.020 "strip_size_kb": 0, 00:15:24.020 "state": "online", 00:15:24.020 "raid_level": "raid1", 00:15:24.020 "superblock": true, 00:15:24.020 "num_base_bdevs": 2, 00:15:24.020 "num_base_bdevs_discovered": 2, 00:15:24.020 "num_base_bdevs_operational": 2, 00:15:24.020 "process": { 00:15:24.020 "type": "rebuild", 00:15:24.020 "target": "spare", 00:15:24.020 "progress": { 00:15:24.020 "blocks": 49152, 00:15:24.020 "percent": 77 00:15:24.020 } 00:15:24.020 }, 00:15:24.020 "base_bdevs_list": [ 00:15:24.020 { 00:15:24.020 "name": "spare", 00:15:24.020 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:24.020 "is_configured": true, 00:15:24.020 "data_offset": 2048, 00:15:24.020 "data_size": 63488 00:15:24.020 }, 00:15:24.020 { 00:15:24.020 "name": "BaseBdev2", 00:15:24.020 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:24.020 "is_configured": true, 00:15:24.020 "data_offset": 2048, 00:15:24.020 "data_size": 63488 00:15:24.020 } 00:15:24.020 ] 00:15:24.020 }' 00:15:24.020 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.020 [2024-10-08 16:23:16.929712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:24.020 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.020 16:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.020 16:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.020 16:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.020 [2024-10-08 16:23:17.140619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:24.283 [2024-10-08 16:23:17.371347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:24.541 98.00 IOPS, 294.00 MiB/s [2024-10-08T16:23:17.863Z] [2024-10-08 16:23:17.811624] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:24.799 [2024-10-08 16:23:17.911644] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:24.799 [2024-10-08 16:23:17.914112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.799 16:23:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.799 "name": "raid_bdev1", 00:15:24.799 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:24.799 "strip_size_kb": 0, 00:15:24.799 "state": "online", 00:15:24.799 "raid_level": "raid1", 00:15:24.799 "superblock": true, 00:15:24.799 "num_base_bdevs": 2, 00:15:24.799 "num_base_bdevs_discovered": 2, 00:15:24.799 "num_base_bdevs_operational": 2, 00:15:24.799 "base_bdevs_list": [ 00:15:24.799 { 00:15:24.799 "name": "spare", 00:15:24.799 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:24.799 "is_configured": true, 00:15:24.799 "data_offset": 2048, 00:15:24.799 "data_size": 63488 00:15:24.799 }, 00:15:24.799 { 00:15:24.799 "name": "BaseBdev2", 00:15:24.799 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:24.799 "is_configured": true, 00:15:24.799 "data_offset": 2048, 00:15:24.799 "data_size": 63488 00:15:24.799 } 00:15:24.799 ] 00:15:24.799 }' 00:15:24.799 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:25.058 
16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.058 "name": "raid_bdev1", 00:15:25.058 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:25.058 "strip_size_kb": 0, 00:15:25.058 "state": "online", 00:15:25.058 "raid_level": "raid1", 00:15:25.058 "superblock": true, 00:15:25.058 "num_base_bdevs": 2, 00:15:25.058 "num_base_bdevs_discovered": 2, 00:15:25.058 "num_base_bdevs_operational": 2, 00:15:25.058 "base_bdevs_list": [ 00:15:25.058 { 00:15:25.058 "name": "spare", 00:15:25.058 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:25.058 "is_configured": true, 00:15:25.058 "data_offset": 2048, 00:15:25.058 "data_size": 63488 00:15:25.058 }, 00:15:25.058 { 00:15:25.058 "name": "BaseBdev2", 00:15:25.058 
"uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:25.058 "is_configured": true, 00:15:25.058 "data_offset": 2048, 00:15:25.058 "data_size": 63488 00:15:25.058 } 00:15:25.058 ] 00:15:25.058 }' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.058 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.059 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.317 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.317 "name": "raid_bdev1", 00:15:25.317 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:25.317 "strip_size_kb": 0, 00:15:25.317 "state": "online", 00:15:25.317 "raid_level": "raid1", 00:15:25.317 "superblock": true, 00:15:25.317 "num_base_bdevs": 2, 00:15:25.317 "num_base_bdevs_discovered": 2, 00:15:25.317 "num_base_bdevs_operational": 2, 00:15:25.317 "base_bdevs_list": [ 00:15:25.317 { 00:15:25.317 "name": "spare", 00:15:25.317 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:25.317 "is_configured": true, 00:15:25.317 "data_offset": 2048, 00:15:25.317 "data_size": 63488 00:15:25.317 }, 00:15:25.317 { 00:15:25.317 "name": "BaseBdev2", 00:15:25.317 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:25.317 "is_configured": true, 00:15:25.317 "data_offset": 2048, 00:15:25.317 "data_size": 63488 00:15:25.317 } 00:15:25.317 ] 00:15:25.317 }' 00:15:25.317 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.317 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.574 90.12 IOPS, 270.38 MiB/s [2024-10-08T16:23:18.896Z] 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.574 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.574 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.574 [2024-10-08 16:23:18.842374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:15:25.574 [2024-10-08 16:23:18.842428] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.833 00:15:25.833 Latency(us) 00:15:25.833 [2024-10-08T16:23:19.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.833 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:25.833 raid_bdev1 : 8.20 88.76 266.29 0.00 0.00 15394.87 281.13 117726.49 00:15:25.833 [2024-10-08T16:23:19.155Z] =================================================================================================================== 00:15:25.833 [2024-10-08T16:23:19.155Z] Total : 88.76 266.29 0.00 0.00 15394.87 281.13 117726.49 00:15:25.833 [2024-10-08 16:23:18.946248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.833 [2024-10-08 16:23:18.946315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.833 [2024-10-08 16:23:18.946434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.833 [2024-10-08 16:23:18.946451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:25.833 { 00:15:25.833 "results": [ 00:15:25.833 { 00:15:25.833 "job": "raid_bdev1", 00:15:25.833 "core_mask": "0x1", 00:15:25.833 "workload": "randrw", 00:15:25.833 "percentage": 50, 00:15:25.833 "status": "finished", 00:15:25.833 "queue_depth": 2, 00:15:25.833 "io_size": 3145728, 00:15:25.833 "runtime": 8.201499, 00:15:25.833 "iops": 88.76426126492242, 00:15:25.833 "mibps": 266.2927837947673, 00:15:25.833 "io_failed": 0, 00:15:25.833 "io_timeout": 0, 00:15:25.833 "avg_latency_us": 15394.86945054945, 00:15:25.833 "min_latency_us": 281.13454545454545, 00:15:25.833 "max_latency_us": 117726.48727272727 00:15:25.833 } 00:15:25.833 ], 00:15:25.833 "core_count": 1 00:15:25.833 } 00:15:25.833 16:23:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.833 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.833 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.833 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:25.833 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.833 16:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.833 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:26.092 /dev/nbd0 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.092 1+0 records in 00:15:26.092 1+0 records out 00:15:26.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445204 s, 9.2 MB/s 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.092 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:26.658 /dev/nbd1 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:26.658 16:23:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.658 1+0 records in 00:15:26.658 1+0 records out 00:15:26.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430441 s, 9.5 MB/s 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.658 
16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.658 16:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.917 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.175 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.434 
16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.434 [2024-10-08 16:23:20.509885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:27.434 [2024-10-08 16:23:20.509960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.434 [2024-10-08 16:23:20.509993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:27.434 [2024-10-08 16:23:20.510016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.434 [2024-10-08 16:23:20.512977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.434 [2024-10-08 16:23:20.513018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:27.434 [2024-10-08 16:23:20.513137] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:27.434 [2024-10-08 16:23:20.513204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.434 [2024-10-08 16:23:20.513391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.434 spare 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.434 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.434 [2024-10-08 16:23:20.613543] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:15:27.434 [2024-10-08 16:23:20.613632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:27.434 [2024-10-08 16:23:20.614068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:27.434 [2024-10-08 16:23:20.614365] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:27.434 [2024-10-08 16:23:20.614392] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:27.434 [2024-10-08 16:23:20.614687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.435 "name": "raid_bdev1", 00:15:27.435 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:27.435 "strip_size_kb": 0, 00:15:27.435 "state": "online", 00:15:27.435 "raid_level": "raid1", 00:15:27.435 "superblock": true, 00:15:27.435 "num_base_bdevs": 2, 00:15:27.435 "num_base_bdevs_discovered": 2, 00:15:27.435 "num_base_bdevs_operational": 2, 00:15:27.435 "base_bdevs_list": [ 00:15:27.435 { 00:15:27.435 "name": "spare", 00:15:27.435 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:27.435 "is_configured": true, 00:15:27.435 "data_offset": 2048, 00:15:27.435 "data_size": 63488 00:15:27.435 }, 00:15:27.435 { 00:15:27.435 "name": "BaseBdev2", 00:15:27.435 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:27.435 "is_configured": true, 00:15:27.435 "data_offset": 2048, 00:15:27.435 "data_size": 63488 00:15:27.435 } 00:15:27.435 ] 00:15:27.435 }' 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.435 16:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.002 "name": "raid_bdev1", 00:15:28.002 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:28.002 "strip_size_kb": 0, 00:15:28.002 "state": "online", 00:15:28.002 "raid_level": "raid1", 00:15:28.002 "superblock": true, 00:15:28.002 "num_base_bdevs": 2, 00:15:28.002 "num_base_bdevs_discovered": 2, 00:15:28.002 "num_base_bdevs_operational": 2, 00:15:28.002 "base_bdevs_list": [ 00:15:28.002 { 00:15:28.002 "name": "spare", 00:15:28.002 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:28.002 "is_configured": true, 00:15:28.002 "data_offset": 2048, 00:15:28.002 "data_size": 63488 00:15:28.002 }, 00:15:28.002 { 00:15:28.002 "name": "BaseBdev2", 00:15:28.002 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:28.002 "is_configured": true, 00:15:28.002 "data_offset": 2048, 00:15:28.002 "data_size": 63488 00:15:28.002 } 00:15:28.002 ] 00:15:28.002 }' 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.002 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.260 [2024-10-08 16:23:21.362961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.260 "name": "raid_bdev1", 00:15:28.260 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:28.260 "strip_size_kb": 0, 00:15:28.260 "state": "online", 00:15:28.260 "raid_level": "raid1", 00:15:28.260 "superblock": true, 00:15:28.260 "num_base_bdevs": 2, 00:15:28.260 "num_base_bdevs_discovered": 1, 00:15:28.260 "num_base_bdevs_operational": 1, 00:15:28.260 "base_bdevs_list": [ 00:15:28.260 { 00:15:28.260 "name": null, 00:15:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.260 "is_configured": false, 00:15:28.260 "data_offset": 0, 00:15:28.260 "data_size": 63488 00:15:28.260 }, 00:15:28.260 { 00:15:28.260 "name": "BaseBdev2", 00:15:28.260 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:28.260 
"is_configured": true, 00:15:28.260 "data_offset": 2048, 00:15:28.260 "data_size": 63488 00:15:28.260 } 00:15:28.260 ] 00:15:28.260 }' 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.260 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.879 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.879 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.879 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.879 [2024-10-08 16:23:21.883256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.879 [2024-10-08 16:23:21.883494] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:28.879 [2024-10-08 16:23:21.883554] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:28.879 [2024-10-08 16:23:21.883601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.879 [2024-10-08 16:23:21.899026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:28.879 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.879 16:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:28.879 [2024-10-08 16:23:21.901593] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.843 "name": "raid_bdev1", 00:15:29.843 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:29.843 "strip_size_kb": 0, 00:15:29.843 "state": "online", 
00:15:29.843 "raid_level": "raid1", 00:15:29.843 "superblock": true, 00:15:29.843 "num_base_bdevs": 2, 00:15:29.843 "num_base_bdevs_discovered": 2, 00:15:29.843 "num_base_bdevs_operational": 2, 00:15:29.843 "process": { 00:15:29.843 "type": "rebuild", 00:15:29.843 "target": "spare", 00:15:29.843 "progress": { 00:15:29.843 "blocks": 20480, 00:15:29.843 "percent": 32 00:15:29.843 } 00:15:29.843 }, 00:15:29.843 "base_bdevs_list": [ 00:15:29.843 { 00:15:29.843 "name": "spare", 00:15:29.843 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:29.843 "is_configured": true, 00:15:29.843 "data_offset": 2048, 00:15:29.843 "data_size": 63488 00:15:29.843 }, 00:15:29.843 { 00:15:29.843 "name": "BaseBdev2", 00:15:29.843 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:29.843 "is_configured": true, 00:15:29.843 "data_offset": 2048, 00:15:29.843 "data_size": 63488 00:15:29.843 } 00:15:29.843 ] 00:15:29.843 }' 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.843 16:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.843 [2024-10-08 16:23:23.055002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.843 [2024-10-08 16:23:23.110935] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:29.843 [2024-10-08 
16:23:23.111054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.843 [2024-10-08 16:23:23.111080] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.843 [2024-10-08 16:23:23.111096] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.843 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.844 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.102 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.102 "name": "raid_bdev1", 00:15:30.102 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:30.102 "strip_size_kb": 0, 00:15:30.102 "state": "online", 00:15:30.102 "raid_level": "raid1", 00:15:30.102 "superblock": true, 00:15:30.102 "num_base_bdevs": 2, 00:15:30.102 "num_base_bdevs_discovered": 1, 00:15:30.102 "num_base_bdevs_operational": 1, 00:15:30.102 "base_bdevs_list": [ 00:15:30.102 { 00:15:30.102 "name": null, 00:15:30.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.102 "is_configured": false, 00:15:30.102 "data_offset": 0, 00:15:30.102 "data_size": 63488 00:15:30.102 }, 00:15:30.102 { 00:15:30.102 "name": "BaseBdev2", 00:15:30.102 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:30.102 "is_configured": true, 00:15:30.102 "data_offset": 2048, 00:15:30.102 "data_size": 63488 00:15:30.102 } 00:15:30.102 ] 00:15:30.102 }' 00:15:30.102 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.102 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.360 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:30.360 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.360 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.360 [2024-10-08 16:23:23.620358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:30.360 [2024-10-08 16:23:23.620468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.360 [2024-10-08 16:23:23.620504] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:30.360 [2024-10-08 16:23:23.620523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.360 [2024-10-08 16:23:23.621241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.360 [2024-10-08 16:23:23.621272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:30.360 [2024-10-08 16:23:23.621390] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:30.360 [2024-10-08 16:23:23.621431] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:30.360 [2024-10-08 16:23:23.621446] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:30.360 [2024-10-08 16:23:23.621476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.360 [2024-10-08 16:23:23.637354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:30.360 spare 00:15:30.360 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.360 16:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:30.360 [2024-10-08 16:23:23.640057] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.738 "name": "raid_bdev1", 00:15:31.738 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:31.738 "strip_size_kb": 0, 00:15:31.738 "state": "online", 00:15:31.738 "raid_level": "raid1", 00:15:31.738 "superblock": true, 00:15:31.738 "num_base_bdevs": 2, 00:15:31.738 "num_base_bdevs_discovered": 2, 00:15:31.738 "num_base_bdevs_operational": 2, 00:15:31.738 "process": { 00:15:31.738 "type": "rebuild", 00:15:31.738 "target": "spare", 00:15:31.738 "progress": { 00:15:31.738 "blocks": 20480, 00:15:31.738 "percent": 32 00:15:31.738 } 00:15:31.738 }, 00:15:31.738 "base_bdevs_list": [ 00:15:31.738 { 00:15:31.738 "name": "spare", 00:15:31.738 "uuid": "83ad53c0-7ae7-509a-aaed-da4515ecf697", 00:15:31.738 "is_configured": true, 00:15:31.738 "data_offset": 2048, 00:15:31.738 "data_size": 63488 00:15:31.738 }, 00:15:31.738 { 00:15:31.738 "name": "BaseBdev2", 00:15:31.738 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:31.738 "is_configured": true, 00:15:31.738 "data_offset": 2048, 00:15:31.738 "data_size": 63488 00:15:31.738 } 00:15:31.738 ] 00:15:31.738 }' 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.738 [2024-10-08 16:23:24.813737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.738 [2024-10-08 16:23:24.849564] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.738 [2024-10-08 16:23:24.849727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.738 [2024-10-08 16:23:24.849758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.738 [2024-10-08 16:23:24.849771] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.738 "name": "raid_bdev1", 00:15:31.738 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:31.738 "strip_size_kb": 0, 00:15:31.738 "state": "online", 00:15:31.738 "raid_level": "raid1", 00:15:31.738 "superblock": true, 00:15:31.738 "num_base_bdevs": 2, 00:15:31.738 "num_base_bdevs_discovered": 1, 00:15:31.738 "num_base_bdevs_operational": 1, 00:15:31.738 "base_bdevs_list": [ 00:15:31.738 { 00:15:31.738 "name": null, 00:15:31.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.738 "is_configured": false, 00:15:31.738 "data_offset": 0, 00:15:31.738 "data_size": 63488 00:15:31.738 }, 00:15:31.738 { 00:15:31.738 "name": "BaseBdev2", 00:15:31.738 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:31.738 "is_configured": true, 00:15:31.738 "data_offset": 2048, 00:15:31.738 "data_size": 63488 00:15:31.738 } 00:15:31.738 ] 00:15:31.738 }' 
00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.738 16:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.307 "name": "raid_bdev1", 00:15:32.307 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:32.307 "strip_size_kb": 0, 00:15:32.307 "state": "online", 00:15:32.307 "raid_level": "raid1", 00:15:32.307 "superblock": true, 00:15:32.307 "num_base_bdevs": 2, 00:15:32.307 "num_base_bdevs_discovered": 1, 00:15:32.307 "num_base_bdevs_operational": 1, 00:15:32.307 "base_bdevs_list": [ 00:15:32.307 { 00:15:32.307 "name": null, 00:15:32.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.307 "is_configured": false, 00:15:32.307 "data_offset": 0, 
00:15:32.307 "data_size": 63488 00:15:32.307 }, 00:15:32.307 { 00:15:32.307 "name": "BaseBdev2", 00:15:32.307 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:32.307 "is_configured": true, 00:15:32.307 "data_offset": 2048, 00:15:32.307 "data_size": 63488 00:15:32.307 } 00:15:32.307 ] 00:15:32.307 }' 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.307 [2024-10-08 16:23:25.567412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.307 [2024-10-08 16:23:25.567476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.307 [2024-10-08 16:23:25.567508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:32.307 [2024-10-08 16:23:25.567539] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.307 [2024-10-08 16:23:25.568145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.307 [2024-10-08 16:23:25.568177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.307 [2024-10-08 16:23:25.568330] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:32.307 [2024-10-08 16:23:25.568367] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:32.307 [2024-10-08 16:23:25.568384] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:32.307 [2024-10-08 16:23:25.568402] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:32.307 BaseBdev1 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.307 16:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.296 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.554 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.554 "name": "raid_bdev1", 00:15:33.554 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:33.554 "strip_size_kb": 0, 00:15:33.554 "state": "online", 00:15:33.555 "raid_level": "raid1", 00:15:33.555 "superblock": true, 00:15:33.555 "num_base_bdevs": 2, 00:15:33.555 "num_base_bdevs_discovered": 1, 00:15:33.555 "num_base_bdevs_operational": 1, 00:15:33.555 "base_bdevs_list": [ 00:15:33.555 { 00:15:33.555 "name": null, 00:15:33.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.555 "is_configured": false, 00:15:33.555 "data_offset": 0, 00:15:33.555 "data_size": 63488 00:15:33.555 }, 00:15:33.555 { 00:15:33.555 "name": "BaseBdev2", 00:15:33.555 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:33.555 "is_configured": true, 00:15:33.555 "data_offset": 2048, 00:15:33.555 "data_size": 63488 00:15:33.555 } 00:15:33.555 ] 00:15:33.555 }' 00:15:33.555 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.555 16:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.813 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.072 "name": "raid_bdev1", 00:15:34.072 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:34.072 "strip_size_kb": 0, 00:15:34.072 "state": "online", 00:15:34.072 "raid_level": "raid1", 00:15:34.072 "superblock": true, 00:15:34.072 "num_base_bdevs": 2, 00:15:34.072 "num_base_bdevs_discovered": 1, 00:15:34.072 "num_base_bdevs_operational": 1, 00:15:34.072 "base_bdevs_list": [ 00:15:34.072 { 00:15:34.072 "name": null, 00:15:34.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.072 "is_configured": false, 00:15:34.072 "data_offset": 0, 00:15:34.072 "data_size": 63488 00:15:34.072 }, 00:15:34.072 { 00:15:34.072 "name": "BaseBdev2", 00:15:34.072 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:34.072 "is_configured": true, 
00:15:34.072 "data_offset": 2048, 00:15:34.072 "data_size": 63488 00:15:34.072 } 00:15:34.072 ] 00:15:34.072 }' 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.072 [2024-10-08 16:23:27.288417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.072 [2024-10-08 16:23:27.288674] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.072 [2024-10-08 16:23:27.288700] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:34.072 request: 00:15:34.072 { 00:15:34.072 "base_bdev": "BaseBdev1", 00:15:34.072 "raid_bdev": "raid_bdev1", 00:15:34.072 "method": "bdev_raid_add_base_bdev", 00:15:34.072 "req_id": 1 00:15:34.072 } 00:15:34.072 Got JSON-RPC error response 00:15:34.072 response: 00:15:34.072 { 00:15:34.072 "code": -22, 00:15:34.072 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:34.072 } 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.072 16:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.264 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.264 "name": "raid_bdev1", 00:15:35.264 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:35.264 "strip_size_kb": 0, 00:15:35.264 "state": "online", 00:15:35.264 "raid_level": "raid1", 00:15:35.264 "superblock": true, 00:15:35.264 "num_base_bdevs": 2, 00:15:35.264 "num_base_bdevs_discovered": 1, 00:15:35.264 "num_base_bdevs_operational": 1, 00:15:35.264 "base_bdevs_list": [ 00:15:35.264 { 00:15:35.264 "name": null, 00:15:35.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.264 "is_configured": false, 00:15:35.264 "data_offset": 0, 00:15:35.264 "data_size": 63488 00:15:35.264 }, 00:15:35.264 { 00:15:35.264 "name": "BaseBdev2", 00:15:35.264 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:35.264 "is_configured": true, 00:15:35.264 "data_offset": 2048, 00:15:35.264 "data_size": 63488 00:15:35.264 } 00:15:35.264 ] 00:15:35.264 }' 
00:15:35.264 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.264 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.832 "name": "raid_bdev1", 00:15:35.832 "uuid": "b6f4ad1a-0464-4947-9612-de6fb43d952e", 00:15:35.832 "strip_size_kb": 0, 00:15:35.832 "state": "online", 00:15:35.832 "raid_level": "raid1", 00:15:35.832 "superblock": true, 00:15:35.832 "num_base_bdevs": 2, 00:15:35.832 "num_base_bdevs_discovered": 1, 00:15:35.832 "num_base_bdevs_operational": 1, 00:15:35.832 "base_bdevs_list": [ 00:15:35.832 { 00:15:35.832 "name": null, 00:15:35.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.832 "is_configured": false, 00:15:35.832 "data_offset": 0, 
00:15:35.832 "data_size": 63488 00:15:35.832 }, 00:15:35.832 { 00:15:35.832 "name": "BaseBdev2", 00:15:35.832 "uuid": "7291d332-987d-5119-a676-d07c1c6f8b31", 00:15:35.832 "is_configured": true, 00:15:35.832 "data_offset": 2048, 00:15:35.832 "data_size": 63488 00:15:35.832 } 00:15:35.832 ] 00:15:35.832 }' 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.832 16:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77428 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77428 ']' 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77428 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77428 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.832 killing process with pid 77428 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77428' 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77428 00:15:35.832 Received shutdown signal, test time was 
about 18.356718 seconds 00:15:35.832 00:15:35.832 Latency(us) 00:15:35.832 [2024-10-08T16:23:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.832 [2024-10-08T16:23:29.154Z] =================================================================================================================== 00:15:35.832 [2024-10-08T16:23:29.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.832 [2024-10-08 16:23:29.081396] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.832 16:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77428 00:15:35.832 [2024-10-08 16:23:29.081652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.832 [2024-10-08 16:23:29.081739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.832 [2024-10-08 16:23:29.081766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:36.090 [2024-10-08 16:23:29.298555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:37.466 00:15:37.466 real 0m21.933s 00:15:37.466 user 0m29.762s 00:15:37.466 sys 0m2.159s 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.466 ************************************ 00:15:37.466 END TEST raid_rebuild_test_sb_io 00:15:37.466 ************************************ 00:15:37.466 16:23:30 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:37.466 16:23:30 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:37.466 16:23:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:37.466 
16:23:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.466 16:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.466 ************************************ 00:15:37.466 START TEST raid_rebuild_test 00:15:37.466 ************************************ 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78134 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78134 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 78134 ']' 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.466 16:23:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.466 16:23:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.466 [2024-10-08 16:23:30.746487] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:15:37.466 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:37.466 Zero copy mechanism will not be used. 00:15:37.466 [2024-10-08 16:23:30.746728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78134 ] 00:15:37.724 [2024-10-08 16:23:30.926611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.982 [2024-10-08 16:23:31.174034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.240 [2024-10-08 16:23:31.381021] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.240 [2024-10-08 16:23:31.381081] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.498 BaseBdev1_malloc 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:31.820067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.756 [2024-10-08 16:23:31.820149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.756 [2024-10-08 16:23:31.820180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:38.756 [2024-10-08 16:23:31.820202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.756 [2024-10-08 16:23:31.823148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.756 [2024-10-08 16:23:31.823190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.756 BaseBdev1 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:38.756 BaseBdev2_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:31.886639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:38.756 [2024-10-08 16:23:31.886722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.756 [2024-10-08 16:23:31.886749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:38.756 [2024-10-08 16:23:31.886776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.756 [2024-10-08 16:23:31.889617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.756 [2024-10-08 16:23:31.889661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:38.756 BaseBdev2 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 BaseBdev3_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:31.940073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:38.756 [2024-10-08 16:23:31.940147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.756 [2024-10-08 16:23:31.940176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:38.756 [2024-10-08 16:23:31.940207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.756 [2024-10-08 16:23:31.943049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.756 [2024-10-08 16:23:31.943093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:38.756 BaseBdev3 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 BaseBdev4_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:31.993426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:38.756 [2024-10-08 16:23:31.993520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.756 [2024-10-08 16:23:31.993581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:38.756 [2024-10-08 16:23:31.993602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.756 [2024-10-08 16:23:31.996458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.756 [2024-10-08 16:23:31.996502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:38.756 BaseBdev4 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 spare_malloc 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 spare_delay 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.756 
16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:32.054687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.756 [2024-10-08 16:23:32.054770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.756 [2024-10-08 16:23:32.054802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:38.756 [2024-10-08 16:23:32.054821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.756 [2024-10-08 16:23:32.057646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.756 [2024-10-08 16:23:32.057690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.756 spare 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.756 [2024-10-08 16:23:32.062776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.756 [2024-10-08 16:23:32.065281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.756 [2024-10-08 16:23:32.065377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.756 [2024-10-08 16:23:32.065455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.756 [2024-10-08 16:23:32.065595] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:15:38.756 [2024-10-08 16:23:32.065631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:38.756 [2024-10-08 16:23:32.066006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:38.756 [2024-10-08 16:23:32.066253] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:38.756 [2024-10-08 16:23:32.066293] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:38.756 [2024-10-08 16:23:32.066505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.756 16:23:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.756 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.014 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.014 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.014 "name": "raid_bdev1", 00:15:39.014 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:39.014 "strip_size_kb": 0, 00:15:39.014 "state": "online", 00:15:39.014 "raid_level": "raid1", 00:15:39.014 "superblock": false, 00:15:39.014 "num_base_bdevs": 4, 00:15:39.014 "num_base_bdevs_discovered": 4, 00:15:39.014 "num_base_bdevs_operational": 4, 00:15:39.014 "base_bdevs_list": [ 00:15:39.014 { 00:15:39.014 "name": "BaseBdev1", 00:15:39.014 "uuid": "8d47cb61-a42f-5c01-b072-749b90ddf9d6", 00:15:39.014 "is_configured": true, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 65536 00:15:39.014 }, 00:15:39.014 { 00:15:39.014 "name": "BaseBdev2", 00:15:39.014 "uuid": "3be9a1da-1e12-5645-b36c-42374d422391", 00:15:39.014 "is_configured": true, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 65536 00:15:39.014 }, 00:15:39.014 { 00:15:39.014 "name": "BaseBdev3", 00:15:39.014 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:39.014 "is_configured": true, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 65536 00:15:39.014 }, 00:15:39.014 { 00:15:39.014 "name": "BaseBdev4", 00:15:39.014 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:39.014 "is_configured": true, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 65536 00:15:39.014 } 00:15:39.014 ] 00:15:39.014 }' 00:15:39.014 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.014 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:39.273 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.273 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.273 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.273 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.273 [2024-10-08 16:23:32.595343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.532 16:23:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:39.790 [2024-10-08 16:23:32.983110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:39.790 /dev/nbd0 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:39.790 16:23:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.790 1+0 records in 00:15:39.790 1+0 records out 00:15:39.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246745 s, 16.6 MB/s 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:39.790 16:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:49.774 65536+0 records in 00:15:49.774 65536+0 records out 00:15:49.774 33554432 bytes (34 MB, 32 MiB) copied, 8.72005 s, 3.8 MB/s 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.774 
16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.774 16:23:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.774 [2024-10-08 16:23:42.019503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.774 [2024-10-08 16:23:42.055711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.774 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.775 "name": "raid_bdev1", 00:15:49.775 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:49.775 "strip_size_kb": 0, 00:15:49.775 "state": "online", 00:15:49.775 "raid_level": "raid1", 00:15:49.775 "superblock": false, 00:15:49.775 "num_base_bdevs": 4, 00:15:49.775 "num_base_bdevs_discovered": 3, 00:15:49.775 "num_base_bdevs_operational": 3, 00:15:49.775 "base_bdevs_list": [ 00:15:49.775 { 00:15:49.775 "name": null, 00:15:49.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.775 
"is_configured": false, 00:15:49.775 "data_offset": 0, 00:15:49.775 "data_size": 65536 00:15:49.775 }, 00:15:49.775 { 00:15:49.775 "name": "BaseBdev2", 00:15:49.775 "uuid": "3be9a1da-1e12-5645-b36c-42374d422391", 00:15:49.775 "is_configured": true, 00:15:49.775 "data_offset": 0, 00:15:49.775 "data_size": 65536 00:15:49.775 }, 00:15:49.775 { 00:15:49.775 "name": "BaseBdev3", 00:15:49.775 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:49.775 "is_configured": true, 00:15:49.775 "data_offset": 0, 00:15:49.775 "data_size": 65536 00:15:49.775 }, 00:15:49.775 { 00:15:49.775 "name": "BaseBdev4", 00:15:49.775 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:49.775 "is_configured": true, 00:15:49.775 "data_offset": 0, 00:15:49.775 "data_size": 65536 00:15:49.775 } 00:15:49.775 ] 00:15:49.775 }' 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.775 [2024-10-08 16:23:42.575936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.775 [2024-10-08 16:23:42.589051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.775 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:49.775 [2024-10-08 16:23:42.591686] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.340 "name": "raid_bdev1", 00:15:50.340 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:50.340 "strip_size_kb": 0, 00:15:50.340 "state": "online", 00:15:50.340 "raid_level": "raid1", 00:15:50.340 "superblock": false, 00:15:50.340 "num_base_bdevs": 4, 00:15:50.340 "num_base_bdevs_discovered": 4, 00:15:50.340 "num_base_bdevs_operational": 4, 00:15:50.340 "process": { 00:15:50.340 "type": "rebuild", 00:15:50.340 "target": "spare", 00:15:50.340 "progress": { 00:15:50.340 "blocks": 20480, 00:15:50.340 "percent": 31 00:15:50.340 } 00:15:50.340 }, 00:15:50.340 "base_bdevs_list": [ 00:15:50.340 { 00:15:50.340 "name": "spare", 00:15:50.340 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:50.340 "is_configured": true, 00:15:50.340 "data_offset": 0, 00:15:50.340 "data_size": 65536 00:15:50.340 }, 00:15:50.340 { 00:15:50.340 "name": "BaseBdev2", 00:15:50.340 "uuid": 
"3be9a1da-1e12-5645-b36c-42374d422391", 00:15:50.340 "is_configured": true, 00:15:50.340 "data_offset": 0, 00:15:50.340 "data_size": 65536 00:15:50.340 }, 00:15:50.340 { 00:15:50.340 "name": "BaseBdev3", 00:15:50.340 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:50.340 "is_configured": true, 00:15:50.340 "data_offset": 0, 00:15:50.340 "data_size": 65536 00:15:50.340 }, 00:15:50.340 { 00:15:50.340 "name": "BaseBdev4", 00:15:50.340 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:50.340 "is_configured": true, 00:15:50.340 "data_offset": 0, 00:15:50.340 "data_size": 65536 00:15:50.340 } 00:15:50.340 ] 00:15:50.340 }' 00:15:50.340 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 [2024-10-08 16:23:43.765681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.598 [2024-10-08 16:23:43.801085] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.598 [2024-10-08 16:23:43.801185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.598 [2024-10-08 16:23:43.801224] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.598 [2024-10-08 16:23:43.801238] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.598 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.598 "name": "raid_bdev1", 00:15:50.598 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:50.598 "strip_size_kb": 0, 00:15:50.598 "state": "online", 
00:15:50.598 "raid_level": "raid1", 00:15:50.598 "superblock": false, 00:15:50.598 "num_base_bdevs": 4, 00:15:50.598 "num_base_bdevs_discovered": 3, 00:15:50.598 "num_base_bdevs_operational": 3, 00:15:50.598 "base_bdevs_list": [ 00:15:50.598 { 00:15:50.598 "name": null, 00:15:50.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.598 "is_configured": false, 00:15:50.598 "data_offset": 0, 00:15:50.598 "data_size": 65536 00:15:50.598 }, 00:15:50.598 { 00:15:50.598 "name": "BaseBdev2", 00:15:50.599 "uuid": "3be9a1da-1e12-5645-b36c-42374d422391", 00:15:50.599 "is_configured": true, 00:15:50.599 "data_offset": 0, 00:15:50.599 "data_size": 65536 00:15:50.599 }, 00:15:50.599 { 00:15:50.599 "name": "BaseBdev3", 00:15:50.599 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:50.599 "is_configured": true, 00:15:50.599 "data_offset": 0, 00:15:50.599 "data_size": 65536 00:15:50.599 }, 00:15:50.599 { 00:15:50.599 "name": "BaseBdev4", 00:15:50.599 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:50.599 "is_configured": true, 00:15:50.599 "data_offset": 0, 00:15:50.599 "data_size": 65536 00:15:50.599 } 00:15:50.599 ] 00:15:50.599 }' 00:15:50.599 16:23:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.599 16:23:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.164 "name": "raid_bdev1", 00:15:51.164 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:51.164 "strip_size_kb": 0, 00:15:51.164 "state": "online", 00:15:51.164 "raid_level": "raid1", 00:15:51.164 "superblock": false, 00:15:51.164 "num_base_bdevs": 4, 00:15:51.164 "num_base_bdevs_discovered": 3, 00:15:51.164 "num_base_bdevs_operational": 3, 00:15:51.164 "base_bdevs_list": [ 00:15:51.164 { 00:15:51.164 "name": null, 00:15:51.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.164 "is_configured": false, 00:15:51.164 "data_offset": 0, 00:15:51.164 "data_size": 65536 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "name": "BaseBdev2", 00:15:51.164 "uuid": "3be9a1da-1e12-5645-b36c-42374d422391", 00:15:51.164 "is_configured": true, 00:15:51.164 "data_offset": 0, 00:15:51.164 "data_size": 65536 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "name": "BaseBdev3", 00:15:51.164 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:51.164 "is_configured": true, 00:15:51.164 "data_offset": 0, 00:15:51.164 "data_size": 65536 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "name": "BaseBdev4", 00:15:51.164 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:51.164 "is_configured": true, 00:15:51.164 "data_offset": 0, 00:15:51.164 "data_size": 65536 00:15:51.164 } 00:15:51.164 ] 00:15:51.164 }' 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.164 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.421 [2024-10-08 16:23:44.491016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.421 [2024-10-08 16:23:44.504125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:51.421 16:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.421 16:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:51.421 [2024-10-08 16:23:44.506757] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.353 "name": "raid_bdev1", 00:15:52.353 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:52.353 "strip_size_kb": 0, 00:15:52.353 "state": "online", 00:15:52.353 "raid_level": "raid1", 00:15:52.353 "superblock": false, 00:15:52.353 "num_base_bdevs": 4, 00:15:52.353 "num_base_bdevs_discovered": 4, 00:15:52.353 "num_base_bdevs_operational": 4, 00:15:52.353 "process": { 00:15:52.353 "type": "rebuild", 00:15:52.353 "target": "spare", 00:15:52.353 "progress": { 00:15:52.353 "blocks": 20480, 00:15:52.353 "percent": 31 00:15:52.353 } 00:15:52.353 }, 00:15:52.353 "base_bdevs_list": [ 00:15:52.353 { 00:15:52.353 "name": "spare", 00:15:52.353 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:52.353 "is_configured": true, 00:15:52.353 "data_offset": 0, 00:15:52.353 "data_size": 65536 00:15:52.353 }, 00:15:52.353 { 00:15:52.353 "name": "BaseBdev2", 00:15:52.353 "uuid": "3be9a1da-1e12-5645-b36c-42374d422391", 00:15:52.353 "is_configured": true, 00:15:52.353 "data_offset": 0, 00:15:52.353 "data_size": 65536 00:15:52.353 }, 00:15:52.353 { 00:15:52.353 "name": "BaseBdev3", 00:15:52.353 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:52.353 "is_configured": true, 00:15:52.353 "data_offset": 0, 00:15:52.353 "data_size": 65536 00:15:52.353 }, 00:15:52.353 { 00:15:52.353 "name": "BaseBdev4", 00:15:52.353 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:52.353 "is_configured": true, 00:15:52.353 "data_offset": 0, 00:15:52.353 "data_size": 65536 00:15:52.353 } 00:15:52.353 ] 00:15:52.353 }' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.610 [2024-10-08 16:23:45.676837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.610 [2024-10-08 16:23:45.716485] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.610 16:23:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.610 "name": "raid_bdev1", 00:15:52.610 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:52.610 "strip_size_kb": 0, 00:15:52.610 "state": "online", 00:15:52.610 "raid_level": "raid1", 00:15:52.610 "superblock": false, 00:15:52.610 "num_base_bdevs": 4, 00:15:52.610 "num_base_bdevs_discovered": 3, 00:15:52.610 "num_base_bdevs_operational": 3, 00:15:52.610 "process": { 00:15:52.610 "type": "rebuild", 00:15:52.610 "target": "spare", 00:15:52.610 "progress": { 00:15:52.610 "blocks": 24576, 00:15:52.610 "percent": 37 00:15:52.610 } 00:15:52.610 }, 00:15:52.610 "base_bdevs_list": [ 00:15:52.610 { 00:15:52.610 "name": "spare", 00:15:52.610 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:52.610 "is_configured": true, 00:15:52.610 "data_offset": 0, 00:15:52.610 "data_size": 65536 00:15:52.610 }, 00:15:52.610 { 00:15:52.610 "name": null, 00:15:52.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.610 "is_configured": false, 00:15:52.610 "data_offset": 0, 00:15:52.610 "data_size": 65536 00:15:52.610 }, 00:15:52.610 { 00:15:52.610 "name": "BaseBdev3", 00:15:52.610 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:52.610 "is_configured": true, 
00:15:52.610 "data_offset": 0, 00:15:52.610 "data_size": 65536 00:15:52.610 }, 00:15:52.610 { 00:15:52.610 "name": "BaseBdev4", 00:15:52.610 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:52.610 "is_configured": true, 00:15:52.610 "data_offset": 0, 00:15:52.610 "data_size": 65536 00:15:52.610 } 00:15:52.610 ] 00:15:52.610 }' 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.610 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.611 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.611 16:23:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.868 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.868 "name": "raid_bdev1", 00:15:52.868 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:52.868 "strip_size_kb": 0, 00:15:52.868 "state": "online", 00:15:52.868 "raid_level": "raid1", 00:15:52.868 "superblock": false, 00:15:52.868 "num_base_bdevs": 4, 00:15:52.868 "num_base_bdevs_discovered": 3, 00:15:52.868 "num_base_bdevs_operational": 3, 00:15:52.868 "process": { 00:15:52.868 "type": "rebuild", 00:15:52.868 "target": "spare", 00:15:52.868 "progress": { 00:15:52.868 "blocks": 26624, 00:15:52.868 "percent": 40 00:15:52.868 } 00:15:52.868 }, 00:15:52.868 "base_bdevs_list": [ 00:15:52.868 { 00:15:52.868 "name": "spare", 00:15:52.868 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:52.868 "is_configured": true, 00:15:52.868 "data_offset": 0, 00:15:52.868 "data_size": 65536 00:15:52.868 }, 00:15:52.868 { 00:15:52.868 "name": null, 00:15:52.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.868 "is_configured": false, 00:15:52.868 "data_offset": 0, 00:15:52.868 "data_size": 65536 00:15:52.868 }, 00:15:52.868 { 00:15:52.868 "name": "BaseBdev3", 00:15:52.868 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:52.868 "is_configured": true, 00:15:52.868 "data_offset": 0, 00:15:52.868 "data_size": 65536 00:15:52.868 }, 00:15:52.868 { 00:15:52.868 "name": "BaseBdev4", 00:15:52.868 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:52.868 "is_configured": true, 00:15:52.868 "data_offset": 0, 00:15:52.868 "data_size": 65536 00:15:52.868 } 00:15:52.868 ] 00:15:52.868 }' 00:15:52.868 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.868 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.868 16:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:52.868 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.868 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.801 "name": "raid_bdev1", 00:15:53.801 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:53.801 "strip_size_kb": 0, 00:15:53.801 "state": "online", 00:15:53.801 "raid_level": "raid1", 00:15:53.801 "superblock": false, 00:15:53.801 "num_base_bdevs": 4, 00:15:53.801 "num_base_bdevs_discovered": 3, 00:15:53.801 "num_base_bdevs_operational": 3, 00:15:53.801 "process": { 00:15:53.801 "type": "rebuild", 00:15:53.801 "target": "spare", 00:15:53.801 "progress": { 00:15:53.801 
"blocks": 51200, 00:15:53.801 "percent": 78 00:15:53.801 } 00:15:53.801 }, 00:15:53.801 "base_bdevs_list": [ 00:15:53.801 { 00:15:53.801 "name": "spare", 00:15:53.801 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:53.801 "is_configured": true, 00:15:53.801 "data_offset": 0, 00:15:53.801 "data_size": 65536 00:15:53.801 }, 00:15:53.801 { 00:15:53.801 "name": null, 00:15:53.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.801 "is_configured": false, 00:15:53.801 "data_offset": 0, 00:15:53.801 "data_size": 65536 00:15:53.801 }, 00:15:53.801 { 00:15:53.801 "name": "BaseBdev3", 00:15:53.801 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:53.801 "is_configured": true, 00:15:53.801 "data_offset": 0, 00:15:53.801 "data_size": 65536 00:15:53.801 }, 00:15:53.801 { 00:15:53.801 "name": "BaseBdev4", 00:15:53.801 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:53.801 "is_configured": true, 00:15:53.801 "data_offset": 0, 00:15:53.801 "data_size": 65536 00:15:53.801 } 00:15:53.801 ] 00:15:53.801 }' 00:15:53.801 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.059 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.059 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.059 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.059 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.624 [2024-10-08 16:23:47.732185] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:54.624 [2024-10-08 16:23:47.732319] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:54.624 [2024-10-08 16:23:47.732389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.209 "name": "raid_bdev1", 00:15:55.209 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:55.209 "strip_size_kb": 0, 00:15:55.209 "state": "online", 00:15:55.209 "raid_level": "raid1", 00:15:55.209 "superblock": false, 00:15:55.209 "num_base_bdevs": 4, 00:15:55.209 "num_base_bdevs_discovered": 3, 00:15:55.209 "num_base_bdevs_operational": 3, 00:15:55.209 "base_bdevs_list": [ 00:15:55.209 { 00:15:55.209 "name": "spare", 00:15:55.209 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:55.209 "is_configured": true, 00:15:55.209 "data_offset": 0, 00:15:55.209 "data_size": 65536 00:15:55.209 }, 00:15:55.209 { 00:15:55.209 "name": null, 00:15:55.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.209 "is_configured": false, 00:15:55.209 
"data_offset": 0, 00:15:55.209 "data_size": 65536 00:15:55.209 }, 00:15:55.209 { 00:15:55.209 "name": "BaseBdev3", 00:15:55.209 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:55.209 "is_configured": true, 00:15:55.209 "data_offset": 0, 00:15:55.209 "data_size": 65536 00:15:55.209 }, 00:15:55.209 { 00:15:55.209 "name": "BaseBdev4", 00:15:55.209 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:55.209 "is_configured": true, 00:15:55.209 "data_offset": 0, 00:15:55.209 "data_size": 65536 00:15:55.209 } 00:15:55.209 ] 00:15:55.209 }' 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.209 16:23:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.209 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.209 "name": "raid_bdev1", 00:15:55.209 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:55.210 "strip_size_kb": 0, 00:15:55.210 "state": "online", 00:15:55.210 "raid_level": "raid1", 00:15:55.210 "superblock": false, 00:15:55.210 "num_base_bdevs": 4, 00:15:55.210 "num_base_bdevs_discovered": 3, 00:15:55.210 "num_base_bdevs_operational": 3, 00:15:55.210 "base_bdevs_list": [ 00:15:55.210 { 00:15:55.210 "name": "spare", 00:15:55.210 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:55.210 "is_configured": true, 00:15:55.210 "data_offset": 0, 00:15:55.210 "data_size": 65536 00:15:55.210 }, 00:15:55.210 { 00:15:55.210 "name": null, 00:15:55.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.210 "is_configured": false, 00:15:55.210 "data_offset": 0, 00:15:55.210 "data_size": 65536 00:15:55.210 }, 00:15:55.210 { 00:15:55.210 "name": "BaseBdev3", 00:15:55.210 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:55.210 "is_configured": true, 00:15:55.210 "data_offset": 0, 00:15:55.210 "data_size": 65536 00:15:55.210 }, 00:15:55.210 { 00:15:55.210 "name": "BaseBdev4", 00:15:55.210 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:55.210 "is_configured": true, 00:15:55.210 "data_offset": 0, 00:15:55.210 "data_size": 65536 00:15:55.210 } 00:15:55.210 ] 00:15:55.210 }' 00:15:55.210 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.210 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.210 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.467 "name": "raid_bdev1", 00:15:55.467 "uuid": "8ba79f29-c2f5-4b69-8211-b2e494ac3cb8", 00:15:55.467 "strip_size_kb": 0, 00:15:55.467 "state": "online", 00:15:55.467 "raid_level": "raid1", 00:15:55.467 "superblock": false, 00:15:55.467 "num_base_bdevs": 4, 00:15:55.467 
"num_base_bdevs_discovered": 3, 00:15:55.467 "num_base_bdevs_operational": 3, 00:15:55.467 "base_bdevs_list": [ 00:15:55.467 { 00:15:55.467 "name": "spare", 00:15:55.467 "uuid": "a8f2b680-aec2-5c9b-b1de-f183459796d8", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 }, 00:15:55.467 { 00:15:55.467 "name": null, 00:15:55.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.467 "is_configured": false, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 }, 00:15:55.467 { 00:15:55.467 "name": "BaseBdev3", 00:15:55.467 "uuid": "6e03f5c2-3dd6-5bfb-b8f7-8beb5ab9dff0", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 }, 00:15:55.467 { 00:15:55.467 "name": "BaseBdev4", 00:15:55.467 "uuid": "9089f2de-9e0a-5b63-8156-6d1b3c8a45b1", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 } 00:15:55.467 ] 00:15:55.467 }' 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.467 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.034 [2024-10-08 16:23:49.075213] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.034 [2024-10-08 16:23:49.075287] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.034 [2024-10-08 16:23:49.075390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.034 [2024-10-08 16:23:49.075500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:56.034 [2024-10-08 16:23:49.075542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.034 16:23:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.034 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:56.294 /dev/nbd0 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.294 1+0 records in 00:15:56.294 1+0 records out 00:15:56.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307309 s, 13.3 MB/s 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.294 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:56.552 /dev/nbd1 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.552 1+0 records in 00:15:56.552 1+0 records out 00:15:56.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406586 s, 10.1 MB/s 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.552 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.811 16:23:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.069 16:23:50 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.069 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78134 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 78134 ']' 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 78134 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # 
uname 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78134 00:15:57.339 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.340 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.340 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78134' 00:15:57.340 killing process with pid 78134 00:15:57.340 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 78134 00:15:57.340 Received shutdown signal, test time was about 60.000000 seconds 00:15:57.340 00:15:57.340 Latency(us) 00:15:57.340 [2024-10-08T16:23:50.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.340 [2024-10-08T16:23:50.662Z] =================================================================================================================== 00:15:57.340 [2024-10-08T16:23:50.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:57.340 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 78134 00:15:57.340 [2024-10-08 16:23:50.634761] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.907 [2024-10-08 16:23:51.065821] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.280 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:59.280 00:15:59.280 real 0m21.655s 00:15:59.280 user 0m23.979s 00:15:59.280 sys 0m3.960s 00:15:59.280 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.280 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.280 ************************************ 00:15:59.280 END TEST raid_rebuild_test 
00:15:59.280 ************************************ 00:15:59.281 16:23:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:59.281 16:23:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:59.281 16:23:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.281 16:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.281 ************************************ 00:15:59.281 START TEST raid_rebuild_test_sb 00:15:59.281 ************************************ 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78619 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78619 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78619 ']' 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.281 16:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.281 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:59.281 Zero copy mechanism will not be used. 00:15:59.281 [2024-10-08 16:23:52.451264] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:15:59.281 [2024-10-08 16:23:52.451443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78619 ] 00:15:59.561 [2024-10-08 16:23:52.627940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.820 [2024-10-08 16:23:52.883897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.820 [2024-10-08 16:23:53.099068] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.820 [2024-10-08 16:23:53.099179] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 BaseBdev1_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.451269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:00.386 [2024-10-08 16:23:53.451352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.386 [2024-10-08 16:23:53.451382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:00.386 [2024-10-08 16:23:53.451404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.386 [2024-10-08 16:23:53.454224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.386 [2024-10-08 16:23:53.454408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:00.386 BaseBdev1 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 BaseBdev2_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.517540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:00.386 [2024-10-08 16:23:53.517621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.386 [2024-10-08 16:23:53.517653] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:00.386 [2024-10-08 16:23:53.517670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.386 [2024-10-08 16:23:53.520392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.386 [2024-10-08 16:23:53.520453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:00.386 BaseBdev2 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 BaseBdev3_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.571459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:00.386 [2024-10-08 16:23:53.571553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.386 [2024-10-08 16:23:53.571584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:00.386 [2024-10-08 16:23:53.571601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:00.386 [2024-10-08 16:23:53.574294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.386 [2024-10-08 16:23:53.574348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:00.386 BaseBdev3 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 BaseBdev4_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.631313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:00.386 [2024-10-08 16:23:53.631408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.386 [2024-10-08 16:23:53.631440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:00.386 [2024-10-08 16:23:53.631457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.386 [2024-10-08 16:23:53.634352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.386 [2024-10-08 16:23:53.634435] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:00.386 BaseBdev4 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 spare_malloc 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 spare_delay 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.696401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:00.386 [2024-10-08 16:23:53.696492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.386 [2024-10-08 16:23:53.696539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:00.386 [2024-10-08 16:23:53.696560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:00.386 [2024-10-08 16:23:53.699345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.386 [2024-10-08 16:23:53.699395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:00.386 spare 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.386 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 [2024-10-08 16:23:53.704477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.386 [2024-10-08 16:23:53.706844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.386 [2024-10-08 16:23:53.706968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.386 [2024-10-08 16:23:53.707045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.386 [2024-10-08 16:23:53.707328] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:00.386 [2024-10-08 16:23:53.707360] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.386 [2024-10-08 16:23:53.707695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:00.645 [2024-10-08 16:23:53.707955] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:00.645 [2024-10-08 16:23:53.707973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:00.645 [2024-10-08 16:23:53.708157] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.645 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.645 "name": "raid_bdev1", 00:16:00.645 "uuid": 
"138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:00.645 "strip_size_kb": 0, 00:16:00.645 "state": "online", 00:16:00.645 "raid_level": "raid1", 00:16:00.645 "superblock": true, 00:16:00.645 "num_base_bdevs": 4, 00:16:00.645 "num_base_bdevs_discovered": 4, 00:16:00.645 "num_base_bdevs_operational": 4, 00:16:00.645 "base_bdevs_list": [ 00:16:00.645 { 00:16:00.645 "name": "BaseBdev1", 00:16:00.645 "uuid": "1651ca46-892d-5f9a-89ad-6dbd2ac49043", 00:16:00.645 "is_configured": true, 00:16:00.645 "data_offset": 2048, 00:16:00.645 "data_size": 63488 00:16:00.645 }, 00:16:00.645 { 00:16:00.645 "name": "BaseBdev2", 00:16:00.645 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:00.645 "is_configured": true, 00:16:00.645 "data_offset": 2048, 00:16:00.646 "data_size": 63488 00:16:00.646 }, 00:16:00.646 { 00:16:00.646 "name": "BaseBdev3", 00:16:00.646 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:00.646 "is_configured": true, 00:16:00.646 "data_offset": 2048, 00:16:00.646 "data_size": 63488 00:16:00.646 }, 00:16:00.646 { 00:16:00.646 "name": "BaseBdev4", 00:16:00.646 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:00.646 "is_configured": true, 00:16:00.646 "data_offset": 2048, 00:16:00.646 "data_size": 63488 00:16:00.646 } 00:16:00.646 ] 00:16:00.646 }' 00:16:00.646 16:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.646 16:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:01.212 [2024-10-08 16:23:54.245121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:01.212 16:23:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.212 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:01.470 [2024-10-08 16:23:54.656875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:01.470 /dev/nbd0 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.470 1+0 records in 00:16:01.470 1+0 records out 00:16:01.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447634 s, 9.2 MB/s 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:01.470 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:11.441 63488+0 records in 00:16:11.441 63488+0 records out 00:16:11.442 32505856 bytes (33 MB, 31 MiB) copied, 8.82553 s, 3.7 MB/s 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.442 [2024-10-08 16:24:03.823554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 [2024-10-08 16:24:03.835712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.442 "name": "raid_bdev1", 00:16:11.442 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:11.442 "strip_size_kb": 0, 00:16:11.442 "state": "online", 00:16:11.442 "raid_level": "raid1", 00:16:11.442 "superblock": true, 00:16:11.442 "num_base_bdevs": 4, 00:16:11.442 "num_base_bdevs_discovered": 3, 00:16:11.442 "num_base_bdevs_operational": 3, 00:16:11.442 "base_bdevs_list": [ 00:16:11.442 { 00:16:11.442 "name": null, 00:16:11.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.442 "is_configured": false, 00:16:11.442 "data_offset": 0, 00:16:11.442 "data_size": 63488 00:16:11.442 }, 00:16:11.442 { 00:16:11.442 "name": "BaseBdev2", 00:16:11.442 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:11.442 "is_configured": true, 00:16:11.442 
"data_offset": 2048, 00:16:11.442 "data_size": 63488 00:16:11.442 }, 00:16:11.442 { 00:16:11.442 "name": "BaseBdev3", 00:16:11.442 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:11.442 "is_configured": true, 00:16:11.442 "data_offset": 2048, 00:16:11.442 "data_size": 63488 00:16:11.442 }, 00:16:11.442 { 00:16:11.442 "name": "BaseBdev4", 00:16:11.442 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:11.442 "is_configured": true, 00:16:11.442 "data_offset": 2048, 00:16:11.442 "data_size": 63488 00:16:11.442 } 00:16:11.442 ] 00:16:11.442 }' 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.442 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.442 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.442 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 [2024-10-08 16:24:04.359886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.442 [2024-10-08 16:24:04.374219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:11.442 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.442 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.442 [2024-10-08 16:24:04.376863] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.378 "name": "raid_bdev1", 00:16:12.378 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:12.378 "strip_size_kb": 0, 00:16:12.378 "state": "online", 00:16:12.378 "raid_level": "raid1", 00:16:12.378 "superblock": true, 00:16:12.378 "num_base_bdevs": 4, 00:16:12.378 "num_base_bdevs_discovered": 4, 00:16:12.378 "num_base_bdevs_operational": 4, 00:16:12.378 "process": { 00:16:12.378 "type": "rebuild", 00:16:12.378 "target": "spare", 00:16:12.378 "progress": { 00:16:12.378 "blocks": 20480, 00:16:12.378 "percent": 32 00:16:12.378 } 00:16:12.378 }, 00:16:12.378 "base_bdevs_list": [ 00:16:12.378 { 00:16:12.378 "name": "spare", 00:16:12.378 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev2", 00:16:12.378 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev3", 00:16:12.378 "uuid": 
"7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev4", 00:16:12.378 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 } 00:16:12.378 ] 00:16:12.378 }' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 [2024-10-08 16:24:05.546561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.378 [2024-10-08 16:24:05.586567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.378 [2024-10-08 16:24:05.586674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.378 [2024-10-08 16:24:05.586701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.378 [2024-10-08 16:24:05.586716] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.378 "name": "raid_bdev1", 00:16:12.378 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:12.378 "strip_size_kb": 0, 00:16:12.378 "state": "online", 00:16:12.378 "raid_level": "raid1", 00:16:12.378 "superblock": true, 00:16:12.378 "num_base_bdevs": 4, 00:16:12.378 
"num_base_bdevs_discovered": 3, 00:16:12.378 "num_base_bdevs_operational": 3, 00:16:12.378 "base_bdevs_list": [ 00:16:12.378 { 00:16:12.378 "name": null, 00:16:12.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.378 "is_configured": false, 00:16:12.378 "data_offset": 0, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev2", 00:16:12.378 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev3", 00:16:12.378 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev4", 00:16:12.378 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 } 00:16:12.378 ] 00:16:12.378 }' 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.378 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.945 "name": "raid_bdev1", 00:16:12.945 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:12.945 "strip_size_kb": 0, 00:16:12.945 "state": "online", 00:16:12.945 "raid_level": "raid1", 00:16:12.945 "superblock": true, 00:16:12.945 "num_base_bdevs": 4, 00:16:12.945 "num_base_bdevs_discovered": 3, 00:16:12.945 "num_base_bdevs_operational": 3, 00:16:12.945 "base_bdevs_list": [ 00:16:12.945 { 00:16:12.945 "name": null, 00:16:12.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.945 "is_configured": false, 00:16:12.945 "data_offset": 0, 00:16:12.945 "data_size": 63488 00:16:12.945 }, 00:16:12.945 { 00:16:12.945 "name": "BaseBdev2", 00:16:12.945 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:12.945 "is_configured": true, 00:16:12.945 "data_offset": 2048, 00:16:12.945 "data_size": 63488 00:16:12.945 }, 00:16:12.945 { 00:16:12.945 "name": "BaseBdev3", 00:16:12.945 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:12.945 "is_configured": true, 00:16:12.945 "data_offset": 2048, 00:16:12.945 "data_size": 63488 00:16:12.945 }, 00:16:12.945 { 00:16:12.945 "name": "BaseBdev4", 00:16:12.945 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:12.945 "is_configured": true, 00:16:12.945 "data_offset": 2048, 00:16:12.945 "data_size": 63488 00:16:12.945 } 00:16:12.945 ] 00:16:12.945 }' 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:12.945 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.204 [2024-10-08 16:24:06.277404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.204 [2024-10-08 16:24:06.289832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.204 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:13.204 [2024-10-08 16:24:06.292292] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.140 "name": "raid_bdev1", 00:16:14.140 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:14.140 "strip_size_kb": 0, 00:16:14.140 "state": "online", 00:16:14.140 "raid_level": "raid1", 00:16:14.140 "superblock": true, 00:16:14.140 "num_base_bdevs": 4, 00:16:14.140 "num_base_bdevs_discovered": 4, 00:16:14.140 "num_base_bdevs_operational": 4, 00:16:14.140 "process": { 00:16:14.140 "type": "rebuild", 00:16:14.140 "target": "spare", 00:16:14.140 "progress": { 00:16:14.140 "blocks": 20480, 00:16:14.140 "percent": 32 00:16:14.140 } 00:16:14.140 }, 00:16:14.140 "base_bdevs_list": [ 00:16:14.140 { 00:16:14.140 "name": "spare", 00:16:14.140 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:14.140 "is_configured": true, 00:16:14.140 "data_offset": 2048, 00:16:14.140 "data_size": 63488 00:16:14.140 }, 00:16:14.140 { 00:16:14.140 "name": "BaseBdev2", 00:16:14.140 "uuid": "156ead4e-06e2-5eac-8e27-7e4702c5fb3e", 00:16:14.140 "is_configured": true, 00:16:14.140 "data_offset": 2048, 00:16:14.140 "data_size": 63488 00:16:14.140 }, 00:16:14.140 { 00:16:14.140 "name": "BaseBdev3", 00:16:14.140 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:14.140 "is_configured": true, 00:16:14.140 "data_offset": 2048, 00:16:14.140 "data_size": 63488 00:16:14.140 }, 00:16:14.140 { 00:16:14.140 "name": "BaseBdev4", 00:16:14.140 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:14.140 "is_configured": true, 00:16:14.140 "data_offset": 2048, 00:16:14.140 "data_size": 63488 00:16:14.140 } 00:16:14.140 ] 00:16:14.140 }' 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.140 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.399 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.399 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:14.399 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:14.399 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:14.399 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.400 [2024-10-08 16:24:07.474500] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.400 [2024-10-08 16:24:07.602258] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.400 "name": "raid_bdev1", 00:16:14.400 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:14.400 "strip_size_kb": 0, 00:16:14.400 "state": "online", 00:16:14.400 "raid_level": "raid1", 00:16:14.400 "superblock": true, 00:16:14.400 "num_base_bdevs": 4, 00:16:14.400 "num_base_bdevs_discovered": 3, 00:16:14.400 "num_base_bdevs_operational": 3, 00:16:14.400 "process": { 00:16:14.400 "type": "rebuild", 00:16:14.400 "target": "spare", 00:16:14.400 "progress": { 00:16:14.400 "blocks": 24576, 00:16:14.400 "percent": 38 00:16:14.400 } 00:16:14.400 }, 00:16:14.400 "base_bdevs_list": [ 00:16:14.400 { 00:16:14.400 "name": "spare", 00:16:14.400 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:14.400 "is_configured": true, 00:16:14.400 "data_offset": 2048, 00:16:14.400 "data_size": 63488 00:16:14.400 }, 00:16:14.400 { 00:16:14.400 "name": null, 00:16:14.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.400 "is_configured": false, 00:16:14.400 "data_offset": 0, 00:16:14.400 "data_size": 63488 00:16:14.400 }, 00:16:14.400 { 00:16:14.400 "name": "BaseBdev3", 00:16:14.400 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:14.400 "is_configured": true, 00:16:14.400 "data_offset": 2048, 00:16:14.400 "data_size": 63488 00:16:14.400 }, 00:16:14.400 { 00:16:14.400 "name": "BaseBdev4", 00:16:14.400 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:14.400 "is_configured": true, 00:16:14.400 "data_offset": 2048, 00:16:14.400 "data_size": 63488 00:16:14.400 } 00:16:14.400 ] 00:16:14.400 }' 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.400 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=517 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.658 
16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.658 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.659 "name": "raid_bdev1", 00:16:14.659 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:14.659 "strip_size_kb": 0, 00:16:14.659 "state": "online", 00:16:14.659 "raid_level": "raid1", 00:16:14.659 "superblock": true, 00:16:14.659 "num_base_bdevs": 4, 00:16:14.659 "num_base_bdevs_discovered": 3, 00:16:14.659 "num_base_bdevs_operational": 3, 00:16:14.659 "process": { 00:16:14.659 "type": "rebuild", 00:16:14.659 "target": "spare", 00:16:14.659 "progress": { 00:16:14.659 "blocks": 26624, 00:16:14.659 "percent": 41 00:16:14.659 } 00:16:14.659 }, 00:16:14.659 "base_bdevs_list": [ 00:16:14.659 { 00:16:14.659 "name": "spare", 00:16:14.659 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:14.659 "is_configured": true, 00:16:14.659 "data_offset": 2048, 00:16:14.659 "data_size": 63488 00:16:14.659 }, 00:16:14.659 { 00:16:14.659 "name": null, 00:16:14.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.659 "is_configured": false, 00:16:14.659 "data_offset": 0, 00:16:14.659 "data_size": 63488 00:16:14.659 }, 00:16:14.659 { 00:16:14.659 "name": "BaseBdev3", 00:16:14.659 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:14.659 "is_configured": true, 00:16:14.659 "data_offset": 2048, 00:16:14.659 "data_size": 63488 00:16:14.659 }, 00:16:14.659 { 00:16:14.659 "name": "BaseBdev4", 00:16:14.659 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:14.659 "is_configured": true, 00:16:14.659 "data_offset": 2048, 00:16:14.659 "data_size": 63488 
00:16:14.659 } 00:16:14.659 ] 00:16:14.659 }' 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.659 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.033 "name": "raid_bdev1", 00:16:16.033 "uuid": 
"138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:16.033 "strip_size_kb": 0, 00:16:16.033 "state": "online", 00:16:16.033 "raid_level": "raid1", 00:16:16.033 "superblock": true, 00:16:16.033 "num_base_bdevs": 4, 00:16:16.033 "num_base_bdevs_discovered": 3, 00:16:16.033 "num_base_bdevs_operational": 3, 00:16:16.033 "process": { 00:16:16.033 "type": "rebuild", 00:16:16.033 "target": "spare", 00:16:16.033 "progress": { 00:16:16.033 "blocks": 51200, 00:16:16.033 "percent": 80 00:16:16.033 } 00:16:16.033 }, 00:16:16.033 "base_bdevs_list": [ 00:16:16.033 { 00:16:16.033 "name": "spare", 00:16:16.033 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:16.033 "is_configured": true, 00:16:16.033 "data_offset": 2048, 00:16:16.033 "data_size": 63488 00:16:16.033 }, 00:16:16.033 { 00:16:16.033 "name": null, 00:16:16.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.033 "is_configured": false, 00:16:16.033 "data_offset": 0, 00:16:16.033 "data_size": 63488 00:16:16.033 }, 00:16:16.033 { 00:16:16.033 "name": "BaseBdev3", 00:16:16.033 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:16.033 "is_configured": true, 00:16:16.033 "data_offset": 2048, 00:16:16.033 "data_size": 63488 00:16:16.033 }, 00:16:16.033 { 00:16:16.033 "name": "BaseBdev4", 00:16:16.033 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:16.033 "is_configured": true, 00:16:16.033 "data_offset": 2048, 00:16:16.033 "data_size": 63488 00:16:16.033 } 00:16:16.033 ] 00:16:16.033 }' 00:16:16.033 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.033 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.033 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.033 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.033 16:24:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.291 [2024-10-08 16:24:09.517109] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:16.291 [2024-10-08 16:24:09.517211] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:16.291 [2024-10-08 16:24:09.517383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.858 "name": "raid_bdev1", 00:16:16.858 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:16.858 "strip_size_kb": 0, 00:16:16.858 "state": "online", 00:16:16.858 "raid_level": "raid1", 00:16:16.858 "superblock": true, 00:16:16.858 "num_base_bdevs": 
4, 00:16:16.858 "num_base_bdevs_discovered": 3, 00:16:16.858 "num_base_bdevs_operational": 3, 00:16:16.858 "base_bdevs_list": [ 00:16:16.858 { 00:16:16.858 "name": "spare", 00:16:16.858 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:16.858 "is_configured": true, 00:16:16.858 "data_offset": 2048, 00:16:16.858 "data_size": 63488 00:16:16.858 }, 00:16:16.858 { 00:16:16.858 "name": null, 00:16:16.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.858 "is_configured": false, 00:16:16.858 "data_offset": 0, 00:16:16.858 "data_size": 63488 00:16:16.858 }, 00:16:16.858 { 00:16:16.858 "name": "BaseBdev3", 00:16:16.858 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:16.858 "is_configured": true, 00:16:16.858 "data_offset": 2048, 00:16:16.858 "data_size": 63488 00:16:16.858 }, 00:16:16.858 { 00:16:16.858 "name": "BaseBdev4", 00:16:16.858 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:16.858 "is_configured": true, 00:16:16.858 "data_offset": 2048, 00:16:16.858 "data_size": 63488 00:16:16.858 } 00:16:16.858 ] 00:16:16.858 }' 00:16:16.858 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.147 16:24:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.147 "name": "raid_bdev1", 00:16:17.147 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:17.147 "strip_size_kb": 0, 00:16:17.147 "state": "online", 00:16:17.147 "raid_level": "raid1", 00:16:17.147 "superblock": true, 00:16:17.147 "num_base_bdevs": 4, 00:16:17.147 "num_base_bdevs_discovered": 3, 00:16:17.147 "num_base_bdevs_operational": 3, 00:16:17.147 "base_bdevs_list": [ 00:16:17.147 { 00:16:17.147 "name": "spare", 00:16:17.147 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:17.147 "is_configured": true, 00:16:17.147 "data_offset": 2048, 00:16:17.147 "data_size": 63488 00:16:17.147 }, 00:16:17.147 { 00:16:17.147 "name": null, 00:16:17.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.147 "is_configured": false, 00:16:17.147 "data_offset": 0, 00:16:17.147 "data_size": 63488 00:16:17.147 }, 00:16:17.147 { 00:16:17.147 "name": "BaseBdev3", 00:16:17.147 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:17.147 "is_configured": true, 00:16:17.147 "data_offset": 2048, 00:16:17.147 "data_size": 63488 00:16:17.147 }, 00:16:17.147 { 00:16:17.147 "name": "BaseBdev4", 00:16:17.147 "uuid": 
"c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:17.147 "is_configured": true, 00:16:17.147 "data_offset": 2048, 00:16:17.147 "data_size": 63488 00:16:17.147 } 00:16:17.147 ] 00:16:17.147 }' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.147 16:24:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.147 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.406 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.406 "name": "raid_bdev1", 00:16:17.406 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:17.406 "strip_size_kb": 0, 00:16:17.406 "state": "online", 00:16:17.406 "raid_level": "raid1", 00:16:17.406 "superblock": true, 00:16:17.406 "num_base_bdevs": 4, 00:16:17.406 "num_base_bdevs_discovered": 3, 00:16:17.406 "num_base_bdevs_operational": 3, 00:16:17.406 "base_bdevs_list": [ 00:16:17.406 { 00:16:17.406 "name": "spare", 00:16:17.406 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:17.406 "is_configured": true, 00:16:17.406 "data_offset": 2048, 00:16:17.406 "data_size": 63488 00:16:17.406 }, 00:16:17.406 { 00:16:17.406 "name": null, 00:16:17.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.406 "is_configured": false, 00:16:17.406 "data_offset": 0, 00:16:17.406 "data_size": 63488 00:16:17.406 }, 00:16:17.406 { 00:16:17.406 "name": "BaseBdev3", 00:16:17.406 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:17.406 "is_configured": true, 00:16:17.406 "data_offset": 2048, 00:16:17.406 "data_size": 63488 00:16:17.406 }, 00:16:17.406 { 00:16:17.406 "name": "BaseBdev4", 00:16:17.406 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:17.406 "is_configured": true, 00:16:17.406 "data_offset": 2048, 00:16:17.406 "data_size": 63488 00:16:17.406 } 00:16:17.406 ] 00:16:17.406 }' 00:16:17.406 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.406 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.665 [2024-10-08 16:24:10.940831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.665 [2024-10-08 16:24:10.940886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.665 [2024-10-08 16:24:10.940997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.665 [2024-10-08 16:24:10.941110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.665 [2024-10-08 16:24:10.941135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:17.665 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.923 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.923 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:18.181 /dev/nbd0 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:18.181 
16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.181 1+0 records in 00:16:18.181 1+0 records out 00:16:18.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522357 s, 7.8 MB/s 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.181 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:18.439 /dev/nbd1 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:18.439 16:24:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.440 1+0 records in 00:16:18.440 1+0 records out 00:16:18.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265292 s, 15.4 MB/s 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.440 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.697 16:24:11 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.697 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.954 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.212 16:24:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:19.212 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.213 [2024-10-08 16:24:12.528758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.213 [2024-10-08 16:24:12.529150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.213 [2024-10-08 16:24:12.529287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:19.213 [2024-10-08 16:24:12.529394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.213 [2024-10-08 16:24:12.532366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.213 [2024-10-08 16:24:12.532534] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.213 [2024-10-08 16:24:12.532747] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.213 [2024-10-08 16:24:12.532828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.213 [2024-10-08 16:24:12.533066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.213 [2024-10-08 16:24:12.533222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:19.213 spare 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.213 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.471 [2024-10-08 16:24:12.633378] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:19.471 [2024-10-08 16:24:12.633436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:19.471 [2024-10-08 16:24:12.633985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:19.471 [2024-10-08 16:24:12.634247] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:19.471 [2024-10-08 16:24:12.634281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:19.471 [2024-10-08 16:24:12.634517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.471 "name": "raid_bdev1", 00:16:19.471 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:19.471 "strip_size_kb": 0, 00:16:19.471 "state": "online", 00:16:19.471 "raid_level": "raid1", 00:16:19.471 "superblock": true, 00:16:19.471 "num_base_bdevs": 4, 00:16:19.471 "num_base_bdevs_discovered": 3, 00:16:19.471 "num_base_bdevs_operational": 
3, 00:16:19.471 "base_bdevs_list": [ 00:16:19.471 { 00:16:19.471 "name": "spare", 00:16:19.471 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:19.471 "is_configured": true, 00:16:19.471 "data_offset": 2048, 00:16:19.471 "data_size": 63488 00:16:19.471 }, 00:16:19.471 { 00:16:19.471 "name": null, 00:16:19.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.471 "is_configured": false, 00:16:19.471 "data_offset": 2048, 00:16:19.471 "data_size": 63488 00:16:19.471 }, 00:16:19.471 { 00:16:19.471 "name": "BaseBdev3", 00:16:19.471 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:19.471 "is_configured": true, 00:16:19.471 "data_offset": 2048, 00:16:19.471 "data_size": 63488 00:16:19.471 }, 00:16:19.471 { 00:16:19.471 "name": "BaseBdev4", 00:16:19.471 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:19.471 "is_configured": true, 00:16:19.471 "data_offset": 2048, 00:16:19.471 "data_size": 63488 00:16:19.471 } 00:16:19.471 ] 00:16:19.471 }' 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.471 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.037 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.038 "name": "raid_bdev1", 00:16:20.038 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:20.038 "strip_size_kb": 0, 00:16:20.038 "state": "online", 00:16:20.038 "raid_level": "raid1", 00:16:20.038 "superblock": true, 00:16:20.038 "num_base_bdevs": 4, 00:16:20.038 "num_base_bdevs_discovered": 3, 00:16:20.038 "num_base_bdevs_operational": 3, 00:16:20.038 "base_bdevs_list": [ 00:16:20.038 { 00:16:20.038 "name": "spare", 00:16:20.038 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:20.038 "is_configured": true, 00:16:20.038 "data_offset": 2048, 00:16:20.038 "data_size": 63488 00:16:20.038 }, 00:16:20.038 { 00:16:20.038 "name": null, 00:16:20.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.038 "is_configured": false, 00:16:20.038 "data_offset": 2048, 00:16:20.038 "data_size": 63488 00:16:20.038 }, 00:16:20.038 { 00:16:20.038 "name": "BaseBdev3", 00:16:20.038 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:20.038 "is_configured": true, 00:16:20.038 "data_offset": 2048, 00:16:20.038 "data_size": 63488 00:16:20.038 }, 00:16:20.038 { 00:16:20.038 "name": "BaseBdev4", 00:16:20.038 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:20.038 "is_configured": true, 00:16:20.038 "data_offset": 2048, 00:16:20.038 "data_size": 63488 00:16:20.038 } 00:16:20.038 ] 00:16:20.038 }' 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.296 [2024-10-08 16:24:13.377096] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.296 "name": "raid_bdev1", 00:16:20.296 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:20.296 "strip_size_kb": 0, 00:16:20.296 "state": "online", 00:16:20.296 "raid_level": "raid1", 00:16:20.296 "superblock": true, 00:16:20.296 "num_base_bdevs": 4, 00:16:20.296 "num_base_bdevs_discovered": 2, 00:16:20.296 "num_base_bdevs_operational": 2, 00:16:20.296 "base_bdevs_list": [ 00:16:20.296 { 00:16:20.296 "name": null, 00:16:20.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.296 "is_configured": false, 00:16:20.296 "data_offset": 0, 00:16:20.296 "data_size": 63488 00:16:20.296 }, 00:16:20.296 { 00:16:20.296 "name": null, 00:16:20.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.296 "is_configured": false, 00:16:20.296 "data_offset": 2048, 00:16:20.296 "data_size": 63488 00:16:20.296 }, 00:16:20.296 { 00:16:20.296 "name": "BaseBdev3", 00:16:20.296 "uuid": 
"7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:20.296 "is_configured": true, 00:16:20.296 "data_offset": 2048, 00:16:20.296 "data_size": 63488 00:16:20.296 }, 00:16:20.296 { 00:16:20.296 "name": "BaseBdev4", 00:16:20.296 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:20.296 "is_configured": true, 00:16:20.296 "data_offset": 2048, 00:16:20.296 "data_size": 63488 00:16:20.296 } 00:16:20.296 ] 00:16:20.296 }' 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.296 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.862 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.862 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.862 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.862 [2024-10-08 16:24:13.925371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.862 [2024-10-08 16:24:13.925697] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:20.862 [2024-10-08 16:24:13.925724] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:20.862 [2024-10-08 16:24:13.925777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.862 [2024-10-08 16:24:13.939702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:20.862 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.862 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:20.862 [2024-10-08 16:24:13.942414] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.796 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.796 "name": "raid_bdev1", 00:16:21.796 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:21.796 "strip_size_kb": 0, 00:16:21.796 "state": "online", 00:16:21.796 "raid_level": "raid1", 
00:16:21.796 "superblock": true, 00:16:21.796 "num_base_bdevs": 4, 00:16:21.796 "num_base_bdevs_discovered": 3, 00:16:21.796 "num_base_bdevs_operational": 3, 00:16:21.796 "process": { 00:16:21.796 "type": "rebuild", 00:16:21.796 "target": "spare", 00:16:21.796 "progress": { 00:16:21.796 "blocks": 20480, 00:16:21.796 "percent": 32 00:16:21.796 } 00:16:21.796 }, 00:16:21.796 "base_bdevs_list": [ 00:16:21.796 { 00:16:21.796 "name": "spare", 00:16:21.796 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:21.796 "is_configured": true, 00:16:21.796 "data_offset": 2048, 00:16:21.796 "data_size": 63488 00:16:21.796 }, 00:16:21.796 { 00:16:21.796 "name": null, 00:16:21.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.796 "is_configured": false, 00:16:21.796 "data_offset": 2048, 00:16:21.796 "data_size": 63488 00:16:21.796 }, 00:16:21.796 { 00:16:21.796 "name": "BaseBdev3", 00:16:21.796 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:21.796 "is_configured": true, 00:16:21.796 "data_offset": 2048, 00:16:21.796 "data_size": 63488 00:16:21.797 }, 00:16:21.797 { 00:16:21.797 "name": "BaseBdev4", 00:16:21.797 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:21.797 "is_configured": true, 00:16:21.797 "data_offset": 2048, 00:16:21.797 "data_size": 63488 00:16:21.797 } 00:16:21.797 ] 00:16:21.797 }' 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:21.797 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.797 [2024-10-08 16:24:15.112571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.055 [2024-10-08 16:24:15.152458] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.055 [2024-10-08 16:24:15.152591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.055 [2024-10-08 16:24:15.152622] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.055 [2024-10-08 16:24:15.152634] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.055 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.056 "name": "raid_bdev1", 00:16:22.056 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:22.056 "strip_size_kb": 0, 00:16:22.056 "state": "online", 00:16:22.056 "raid_level": "raid1", 00:16:22.056 "superblock": true, 00:16:22.056 "num_base_bdevs": 4, 00:16:22.056 "num_base_bdevs_discovered": 2, 00:16:22.056 "num_base_bdevs_operational": 2, 00:16:22.056 "base_bdevs_list": [ 00:16:22.056 { 00:16:22.056 "name": null, 00:16:22.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.056 "is_configured": false, 00:16:22.056 "data_offset": 0, 00:16:22.056 "data_size": 63488 00:16:22.056 }, 00:16:22.056 { 00:16:22.056 "name": null, 00:16:22.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.056 "is_configured": false, 00:16:22.056 "data_offset": 2048, 00:16:22.056 "data_size": 63488 00:16:22.056 }, 00:16:22.056 { 00:16:22.056 "name": "BaseBdev3", 00:16:22.056 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:22.056 "is_configured": true, 00:16:22.056 "data_offset": 2048, 00:16:22.056 "data_size": 63488 00:16:22.056 }, 00:16:22.056 { 00:16:22.056 "name": "BaseBdev4", 00:16:22.056 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:22.056 "is_configured": true, 00:16:22.056 "data_offset": 2048, 00:16:22.056 "data_size": 63488 00:16:22.056 } 00:16:22.056 ] 00:16:22.056 }' 00:16:22.056 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:22.056 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.622 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.622 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.622 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.622 [2024-10-08 16:24:15.675480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.622 [2024-10-08 16:24:15.675612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.622 [2024-10-08 16:24:15.675650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:22.623 [2024-10-08 16:24:15.675666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.623 [2024-10-08 16:24:15.676386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.623 [2024-10-08 16:24:15.676436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.623 [2024-10-08 16:24:15.676580] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:22.623 [2024-10-08 16:24:15.676602] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:22.623 [2024-10-08 16:24:15.676619] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:22.623 [2024-10-08 16:24:15.676656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.623 [2024-10-08 16:24:15.690367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:22.623 spare 00:16:22.623 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.623 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:22.623 [2024-10-08 16:24:15.692970] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.561 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.562 "name": "raid_bdev1", 00:16:23.562 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:23.562 "strip_size_kb": 0, 00:16:23.562 "state": "online", 00:16:23.562 
"raid_level": "raid1", 00:16:23.562 "superblock": true, 00:16:23.562 "num_base_bdevs": 4, 00:16:23.562 "num_base_bdevs_discovered": 3, 00:16:23.562 "num_base_bdevs_operational": 3, 00:16:23.562 "process": { 00:16:23.562 "type": "rebuild", 00:16:23.562 "target": "spare", 00:16:23.562 "progress": { 00:16:23.562 "blocks": 20480, 00:16:23.562 "percent": 32 00:16:23.562 } 00:16:23.562 }, 00:16:23.562 "base_bdevs_list": [ 00:16:23.562 { 00:16:23.562 "name": "spare", 00:16:23.562 "uuid": "324159eb-a214-51c7-82a6-61c3890bfd48", 00:16:23.562 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 }, 00:16:23.562 { 00:16:23.562 "name": null, 00:16:23.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.562 "is_configured": false, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 }, 00:16:23.562 { 00:16:23.562 "name": "BaseBdev3", 00:16:23.562 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:23.562 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 }, 00:16:23.562 { 00:16:23.562 "name": "BaseBdev4", 00:16:23.562 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:23.562 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 } 00:16:23.562 ] 00:16:23.562 }' 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.562 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.562 [2024-10-08 16:24:16.851499] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.820 [2024-10-08 16:24:16.902987] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.820 [2024-10-08 16:24:16.903081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.820 [2024-10-08 16:24:16.903105] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.820 [2024-10-08 16:24:16.903120] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.820 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.820 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.820 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.820 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.820 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.821 
16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.821 "name": "raid_bdev1", 00:16:23.821 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:23.821 "strip_size_kb": 0, 00:16:23.821 "state": "online", 00:16:23.821 "raid_level": "raid1", 00:16:23.821 "superblock": true, 00:16:23.821 "num_base_bdevs": 4, 00:16:23.821 "num_base_bdevs_discovered": 2, 00:16:23.821 "num_base_bdevs_operational": 2, 00:16:23.821 "base_bdevs_list": [ 00:16:23.821 { 00:16:23.821 "name": null, 00:16:23.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.821 "is_configured": false, 00:16:23.821 "data_offset": 0, 00:16:23.821 "data_size": 63488 00:16:23.821 }, 00:16:23.821 { 00:16:23.821 "name": null, 00:16:23.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.821 "is_configured": false, 00:16:23.821 "data_offset": 2048, 00:16:23.821 "data_size": 63488 00:16:23.821 }, 00:16:23.821 { 00:16:23.821 "name": "BaseBdev3", 00:16:23.821 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:23.821 "is_configured": true, 00:16:23.821 "data_offset": 2048, 00:16:23.821 "data_size": 63488 00:16:23.821 }, 00:16:23.821 { 00:16:23.821 "name": "BaseBdev4", 00:16:23.821 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:23.821 "is_configured": true, 00:16:23.821 "data_offset": 2048, 00:16:23.821 "data_size": 63488 00:16:23.821 } 00:16:23.821 ] 00:16:23.821 }' 00:16:23.821 16:24:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.821 16:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.387 "name": "raid_bdev1", 00:16:24.387 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:24.387 "strip_size_kb": 0, 00:16:24.387 "state": "online", 00:16:24.387 "raid_level": "raid1", 00:16:24.387 "superblock": true, 00:16:24.387 "num_base_bdevs": 4, 00:16:24.387 "num_base_bdevs_discovered": 2, 00:16:24.387 "num_base_bdevs_operational": 2, 00:16:24.387 "base_bdevs_list": [ 00:16:24.387 { 00:16:24.387 "name": null, 00:16:24.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.387 "is_configured": false, 00:16:24.387 "data_offset": 0, 00:16:24.387 "data_size": 63488 00:16:24.387 }, 00:16:24.387 
{ 00:16:24.387 "name": null, 00:16:24.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.387 "is_configured": false, 00:16:24.387 "data_offset": 2048, 00:16:24.387 "data_size": 63488 00:16:24.387 }, 00:16:24.387 { 00:16:24.387 "name": "BaseBdev3", 00:16:24.387 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:24.387 "is_configured": true, 00:16:24.387 "data_offset": 2048, 00:16:24.387 "data_size": 63488 00:16:24.387 }, 00:16:24.387 { 00:16:24.387 "name": "BaseBdev4", 00:16:24.387 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:24.387 "is_configured": true, 00:16:24.387 "data_offset": 2048, 00:16:24.387 "data_size": 63488 00:16:24.387 } 00:16:24.387 ] 00:16:24.387 }' 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.387 [2024-10-08 16:24:17.585917] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.387 [2024-10-08 16:24:17.586033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.387 [2024-10-08 16:24:17.586062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:24.387 [2024-10-08 16:24:17.586079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.387 [2024-10-08 16:24:17.586750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.387 [2024-10-08 16:24:17.586805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.387 [2024-10-08 16:24:17.586908] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:24.387 [2024-10-08 16:24:17.586934] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:24.387 [2024-10-08 16:24:17.586946] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:24.387 [2024-10-08 16:24:17.586967] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:24.387 BaseBdev1 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.387 16:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.319 16:24:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.319 16:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.578 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.578 "name": "raid_bdev1", 00:16:25.578 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:25.578 "strip_size_kb": 0, 00:16:25.578 "state": "online", 00:16:25.578 "raid_level": "raid1", 00:16:25.578 "superblock": true, 00:16:25.578 "num_base_bdevs": 4, 00:16:25.578 "num_base_bdevs_discovered": 2, 00:16:25.578 "num_base_bdevs_operational": 2, 00:16:25.578 "base_bdevs_list": [ 00:16:25.578 { 00:16:25.578 "name": null, 00:16:25.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.578 "is_configured": false, 00:16:25.578 "data_offset": 0, 00:16:25.578 "data_size": 63488 00:16:25.578 }, 00:16:25.578 { 00:16:25.578 "name": null, 00:16:25.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.578 
"is_configured": false, 00:16:25.578 "data_offset": 2048, 00:16:25.578 "data_size": 63488 00:16:25.578 }, 00:16:25.578 { 00:16:25.578 "name": "BaseBdev3", 00:16:25.578 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:25.578 "is_configured": true, 00:16:25.578 "data_offset": 2048, 00:16:25.578 "data_size": 63488 00:16:25.578 }, 00:16:25.578 { 00:16:25.578 "name": "BaseBdev4", 00:16:25.578 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:25.578 "is_configured": true, 00:16:25.578 "data_offset": 2048, 00:16:25.578 "data_size": 63488 00:16:25.578 } 00:16:25.578 ] 00:16:25.578 }' 00:16:25.578 16:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.578 16:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.836 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.095 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:26.095 "name": "raid_bdev1", 00:16:26.095 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:26.095 "strip_size_kb": 0, 00:16:26.095 "state": "online", 00:16:26.095 "raid_level": "raid1", 00:16:26.095 "superblock": true, 00:16:26.095 "num_base_bdevs": 4, 00:16:26.095 "num_base_bdevs_discovered": 2, 00:16:26.095 "num_base_bdevs_operational": 2, 00:16:26.095 "base_bdevs_list": [ 00:16:26.095 { 00:16:26.095 "name": null, 00:16:26.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.095 "is_configured": false, 00:16:26.095 "data_offset": 0, 00:16:26.095 "data_size": 63488 00:16:26.095 }, 00:16:26.095 { 00:16:26.095 "name": null, 00:16:26.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.095 "is_configured": false, 00:16:26.095 "data_offset": 2048, 00:16:26.095 "data_size": 63488 00:16:26.095 }, 00:16:26.095 { 00:16:26.095 "name": "BaseBdev3", 00:16:26.095 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:26.095 "is_configured": true, 00:16:26.095 "data_offset": 2048, 00:16:26.095 "data_size": 63488 00:16:26.095 }, 00:16:26.095 { 00:16:26.095 "name": "BaseBdev4", 00:16:26.095 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:26.095 "is_configured": true, 00:16:26.095 "data_offset": 2048, 00:16:26.096 "data_size": 63488 00:16:26.096 } 00:16:26.096 ] 00:16:26.096 }' 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.096 [2024-10-08 16:24:19.310671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.096 [2024-10-08 16:24:19.310957] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:26.096 [2024-10-08 16:24:19.310980] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.096 request: 00:16:26.096 { 00:16:26.096 "base_bdev": "BaseBdev1", 00:16:26.096 "raid_bdev": "raid_bdev1", 00:16:26.096 "method": "bdev_raid_add_base_bdev", 00:16:26.096 "req_id": 1 00:16:26.096 } 00:16:26.096 Got JSON-RPC error response 00:16:26.096 response: 00:16:26.096 { 00:16:26.096 "code": -22, 00:16:26.096 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:26.096 } 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.096 16:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:27.031 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.288 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.288 "name": "raid_bdev1", 00:16:27.288 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:27.288 "strip_size_kb": 0, 00:16:27.288 "state": "online", 00:16:27.288 "raid_level": "raid1", 00:16:27.288 "superblock": true, 00:16:27.288 "num_base_bdevs": 4, 00:16:27.288 "num_base_bdevs_discovered": 2, 00:16:27.288 "num_base_bdevs_operational": 2, 00:16:27.289 "base_bdevs_list": [ 00:16:27.289 { 00:16:27.289 "name": null, 00:16:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.289 "is_configured": false, 00:16:27.289 "data_offset": 0, 00:16:27.289 "data_size": 63488 00:16:27.289 }, 00:16:27.289 { 00:16:27.289 "name": null, 00:16:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.289 "is_configured": false, 00:16:27.289 "data_offset": 2048, 00:16:27.289 "data_size": 63488 00:16:27.289 }, 00:16:27.289 { 00:16:27.289 "name": "BaseBdev3", 00:16:27.289 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:27.289 "is_configured": true, 00:16:27.289 "data_offset": 2048, 00:16:27.289 "data_size": 63488 00:16:27.289 }, 00:16:27.289 { 00:16:27.289 "name": "BaseBdev4", 00:16:27.289 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:27.289 "is_configured": true, 00:16:27.289 "data_offset": 2048, 00:16:27.289 "data_size": 63488 00:16:27.289 } 00:16:27.289 ] 00:16:27.289 }' 00:16:27.289 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.289 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.547 16:24:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.805 16:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.805 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.805 "name": "raid_bdev1", 00:16:27.805 "uuid": "138edb6d-d5eb-469b-a3e1-7877d17d42c6", 00:16:27.805 "strip_size_kb": 0, 00:16:27.805 "state": "online", 00:16:27.805 "raid_level": "raid1", 00:16:27.805 "superblock": true, 00:16:27.805 "num_base_bdevs": 4, 00:16:27.805 "num_base_bdevs_discovered": 2, 00:16:27.805 "num_base_bdevs_operational": 2, 00:16:27.805 "base_bdevs_list": [ 00:16:27.805 { 00:16:27.805 "name": null, 00:16:27.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.805 "is_configured": false, 00:16:27.805 "data_offset": 0, 00:16:27.805 "data_size": 63488 00:16:27.805 }, 00:16:27.805 { 00:16:27.805 "name": null, 00:16:27.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.805 "is_configured": false, 00:16:27.805 "data_offset": 2048, 00:16:27.805 "data_size": 63488 00:16:27.805 }, 00:16:27.805 { 00:16:27.805 "name": "BaseBdev3", 00:16:27.805 "uuid": "7e6f639c-53f2-57d4-90e0-b3d25f3272fd", 00:16:27.805 "is_configured": true, 00:16:27.805 "data_offset": 2048, 00:16:27.805 "data_size": 63488 00:16:27.805 }, 
00:16:27.805 { 00:16:27.805 "name": "BaseBdev4", 00:16:27.805 "uuid": "c1744a8d-ba7a-518e-8009-5ce6dcf486a6", 00:16:27.805 "is_configured": true, 00:16:27.805 "data_offset": 2048, 00:16:27.805 "data_size": 63488 00:16:27.805 } 00:16:27.805 ] 00:16:27.805 }' 00:16:27.805 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.805 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.805 16:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78619 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78619 ']' 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78619 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78619 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.805 killing process with pid 78619 00:16:27.805 Received shutdown signal, test time was about 60.000000 seconds 00:16:27.805 00:16:27.805 Latency(us) 00:16:27.805 [2024-10-08T16:24:21.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.805 [2024-10-08T16:24:21.127Z] 
=================================================================================================================== 00:16:27.805 [2024-10-08T16:24:21.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78619' 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78619 00:16:27.805 [2024-10-08 16:24:21.061623] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.805 16:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78619 00:16:27.805 [2024-10-08 16:24:21.061775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.805 [2024-10-08 16:24:21.061879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.805 [2024-10-08 16:24:21.061908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:28.371 [2024-10-08 16:24:21.516384] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.748 16:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.748 00:16:29.748 real 0m30.424s 00:16:29.748 user 0m36.613s 00:16:29.748 sys 0m4.729s 00:16:29.748 ************************************ 00:16:29.748 END TEST raid_rebuild_test_sb 00:16:29.748 ************************************ 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.749 16:24:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:29.749 16:24:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:29.749 16:24:22 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.749 16:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.749 ************************************ 00:16:29.749 START TEST raid_rebuild_test_io 00:16:29.749 ************************************ 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79423 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79423 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 79423 ']' 00:16:29.749 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.749 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.749 [2024-10-08 16:24:22.953009] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:16:29.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.749 Zero copy mechanism will not be used. 00:16:29.749 [2024-10-08 16:24:22.953603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79423 ] 00:16:30.007 [2024-10-08 16:24:23.134729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.266 [2024-10-08 16:24:23.377011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.266 [2024-10-08 16:24:23.586149] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.266 [2024-10-08 16:24:23.586201] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.878 
16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 BaseBdev1_malloc 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 [2024-10-08 16:24:23.975510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.878 [2024-10-08 16:24:23.975650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.878 [2024-10-08 16:24:23.975688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.878 [2024-10-08 16:24:23.975712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.878 [2024-10-08 16:24:23.978805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.878 [2024-10-08 16:24:23.978858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.878 BaseBdev1 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 BaseBdev2_malloc 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 [2024-10-08 16:24:24.038229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.878 [2024-10-08 16:24:24.038326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.878 [2024-10-08 16:24:24.038357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.878 [2024-10-08 16:24:24.038378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.878 [2024-10-08 16:24:24.041341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.878 [2024-10-08 16:24:24.041683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.878 BaseBdev2 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 BaseBdev3_malloc 00:16:30.878 
16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 [2024-10-08 16:24:24.091428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:30.878 [2024-10-08 16:24:24.091576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.878 [2024-10-08 16:24:24.091610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.878 [2024-10-08 16:24:24.091628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.878 [2024-10-08 16:24:24.094219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.878 [2024-10-08 16:24:24.094267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.878 BaseBdev3 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 BaseBdev4_malloc 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.878 [2024-10-08 16:24:24.146543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:30.878 [2024-10-08 16:24:24.146657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.878 [2024-10-08 16:24:24.146688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:30.878 [2024-10-08 16:24:24.146706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.878 [2024-10-08 16:24:24.149667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.878 [2024-10-08 16:24:24.149720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:30.878 BaseBdev4 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.878 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.879 spare_malloc 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.879 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.138 spare_delay 00:16:31.138 
16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.138 [2024-10-08 16:24:24.207967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.138 [2024-10-08 16:24:24.208059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.138 [2024-10-08 16:24:24.208088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:31.138 [2024-10-08 16:24:24.208104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.138 [2024-10-08 16:24:24.211151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.138 [2024-10-08 16:24:24.211221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.138 spare 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.138 [2024-10-08 16:24:24.216081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.138 [2024-10-08 16:24:24.218568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.138 [2024-10-08 16:24:24.218694] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.138 [2024-10-08 16:24:24.218776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:31.138 [2024-10-08 16:24:24.218890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:31.138 [2024-10-08 16:24:24.218910] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:31.138 [2024-10-08 16:24:24.219322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:31.138 [2024-10-08 16:24:24.219558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:31.138 [2024-10-08 16:24:24.219577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:31.138 [2024-10-08 16:24:24.220086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.138 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.139 
16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.139 "name": "raid_bdev1", 00:16:31.139 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:31.139 "strip_size_kb": 0, 00:16:31.139 "state": "online", 00:16:31.139 "raid_level": "raid1", 00:16:31.139 "superblock": false, 00:16:31.139 "num_base_bdevs": 4, 00:16:31.139 "num_base_bdevs_discovered": 4, 00:16:31.139 "num_base_bdevs_operational": 4, 00:16:31.139 "base_bdevs_list": [ 00:16:31.139 { 00:16:31.139 "name": "BaseBdev1", 00:16:31.139 "uuid": "48fbc0ee-bf78-5d44-bb45-153eb2495a2d", 00:16:31.139 "is_configured": true, 00:16:31.139 "data_offset": 0, 00:16:31.139 "data_size": 65536 00:16:31.139 }, 00:16:31.139 { 00:16:31.139 "name": "BaseBdev2", 00:16:31.139 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:31.139 "is_configured": true, 00:16:31.139 "data_offset": 0, 00:16:31.139 "data_size": 65536 00:16:31.139 }, 00:16:31.139 { 00:16:31.139 "name": "BaseBdev3", 00:16:31.139 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:31.139 "is_configured": true, 00:16:31.139 "data_offset": 0, 00:16:31.139 "data_size": 65536 00:16:31.139 }, 00:16:31.139 { 00:16:31.139 "name": "BaseBdev4", 00:16:31.139 "uuid": 
"42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:31.139 "is_configured": true, 00:16:31.139 "data_offset": 0, 00:16:31.139 "data_size": 65536 00:16:31.139 } 00:16:31.139 ] 00:16:31.139 }' 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.139 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.398 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.398 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.398 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:31.398 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.398 [2024-10-08 16:24:24.716877] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.658 [2024-10-08 16:24:24.824429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.658 "name": "raid_bdev1", 00:16:31.658 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:31.658 "strip_size_kb": 0, 00:16:31.658 "state": "online", 00:16:31.658 "raid_level": "raid1", 00:16:31.658 "superblock": false, 00:16:31.658 "num_base_bdevs": 4, 00:16:31.658 "num_base_bdevs_discovered": 3, 00:16:31.658 "num_base_bdevs_operational": 3, 00:16:31.658 "base_bdevs_list": [ 00:16:31.658 { 00:16:31.658 "name": null, 00:16:31.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.658 "is_configured": false, 00:16:31.658 "data_offset": 0, 00:16:31.658 "data_size": 65536 00:16:31.658 }, 00:16:31.658 { 00:16:31.658 "name": "BaseBdev2", 00:16:31.658 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:31.658 "is_configured": true, 00:16:31.658 "data_offset": 0, 00:16:31.658 "data_size": 65536 00:16:31.658 }, 00:16:31.658 { 00:16:31.658 "name": "BaseBdev3", 00:16:31.658 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:31.658 "is_configured": true, 00:16:31.658 "data_offset": 0, 00:16:31.658 "data_size": 65536 00:16:31.658 }, 00:16:31.658 { 00:16:31.658 "name": "BaseBdev4", 00:16:31.658 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:31.658 "is_configured": true, 00:16:31.658 "data_offset": 0, 00:16:31.658 "data_size": 65536 00:16:31.658 } 00:16:31.658 ] 00:16:31.658 }' 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.658 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.658 [2024-10-08 16:24:24.952754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:16:31.658 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:31.658 Zero copy mechanism will not be used. 00:16:31.658 Running I/O for 60 seconds... 00:16:32.225 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.225 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.225 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.225 [2024-10-08 16:24:25.349344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.225 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.225 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:32.225 [2024-10-08 16:24:25.417121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:32.225 [2024-10-08 16:24:25.420090] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.225 [2024-10-08 16:24:25.531342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:32.225 [2024-10-08 16:24:25.532250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:32.483 [2024-10-08 16:24:25.748069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.483 [2024-10-08 16:24:25.748467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.742 143.00 IOPS, 429.00 MiB/s [2024-10-08T16:24:26.064Z] [2024-10-08 16:24:26.019352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:33.001 [2024-10-08 16:24:26.250358] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:33.001 [2024-10-08 16:24:26.251350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.260 "name": "raid_bdev1", 00:16:33.260 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:33.260 "strip_size_kb": 0, 00:16:33.260 "state": "online", 00:16:33.260 "raid_level": "raid1", 00:16:33.260 "superblock": false, 00:16:33.260 "num_base_bdevs": 4, 00:16:33.260 "num_base_bdevs_discovered": 4, 00:16:33.260 "num_base_bdevs_operational": 4, 00:16:33.260 "process": { 00:16:33.260 "type": "rebuild", 00:16:33.260 "target": "spare", 00:16:33.260 "progress": { 00:16:33.260 "blocks": 
10240, 00:16:33.260 "percent": 15 00:16:33.260 } 00:16:33.260 }, 00:16:33.260 "base_bdevs_list": [ 00:16:33.260 { 00:16:33.260 "name": "spare", 00:16:33.260 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:33.260 "is_configured": true, 00:16:33.260 "data_offset": 0, 00:16:33.260 "data_size": 65536 00:16:33.260 }, 00:16:33.260 { 00:16:33.260 "name": "BaseBdev2", 00:16:33.260 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:33.260 "is_configured": true, 00:16:33.260 "data_offset": 0, 00:16:33.260 "data_size": 65536 00:16:33.260 }, 00:16:33.260 { 00:16:33.260 "name": "BaseBdev3", 00:16:33.260 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:33.260 "is_configured": true, 00:16:33.260 "data_offset": 0, 00:16:33.260 "data_size": 65536 00:16:33.260 }, 00:16:33.260 { 00:16:33.260 "name": "BaseBdev4", 00:16:33.260 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:33.260 "is_configured": true, 00:16:33.260 "data_offset": 0, 00:16:33.260 "data_size": 65536 00:16:33.260 } 00:16:33.260 ] 00:16:33.260 }' 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.260 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.260 [2024-10-08 16:24:26.580642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.518 [2024-10-08 16:24:26.593134] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:33.518 [2024-10-08 16:24:26.595066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:33.518 [2024-10-08 16:24:26.705720] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.518 [2024-10-08 16:24:26.728296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.518 [2024-10-08 16:24:26.728386] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.518 [2024-10-08 16:24:26.728434] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.518 [2024-10-08 16:24:26.768111] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.518 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.776 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.776 "name": "raid_bdev1", 00:16:33.776 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:33.776 "strip_size_kb": 0, 00:16:33.776 "state": "online", 00:16:33.776 "raid_level": "raid1", 00:16:33.776 "superblock": false, 00:16:33.776 "num_base_bdevs": 4, 00:16:33.776 "num_base_bdevs_discovered": 3, 00:16:33.776 "num_base_bdevs_operational": 3, 00:16:33.776 "base_bdevs_list": [ 00:16:33.776 { 00:16:33.776 "name": null, 00:16:33.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.776 "is_configured": false, 00:16:33.776 "data_offset": 0, 00:16:33.776 "data_size": 65536 00:16:33.776 }, 00:16:33.776 { 00:16:33.776 "name": "BaseBdev2", 00:16:33.776 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:33.776 "is_configured": true, 00:16:33.776 "data_offset": 0, 00:16:33.776 "data_size": 65536 00:16:33.776 }, 00:16:33.776 { 00:16:33.776 "name": "BaseBdev3", 00:16:33.776 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:33.776 "is_configured": true, 00:16:33.776 "data_offset": 0, 00:16:33.776 "data_size": 65536 00:16:33.776 }, 00:16:33.776 { 00:16:33.776 "name": "BaseBdev4", 00:16:33.776 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:33.776 "is_configured": true, 00:16:33.776 
"data_offset": 0, 00:16:33.776 "data_size": 65536 00:16:33.776 } 00:16:33.776 ] 00:16:33.776 }' 00:16:33.776 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.776 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.035 113.50 IOPS, 340.50 MiB/s [2024-10-08T16:24:27.357Z] 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.035 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.035 "name": "raid_bdev1", 00:16:34.035 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:34.035 "strip_size_kb": 0, 00:16:34.035 "state": "online", 00:16:34.035 "raid_level": "raid1", 00:16:34.035 "superblock": false, 00:16:34.036 "num_base_bdevs": 4, 00:16:34.036 "num_base_bdevs_discovered": 3, 00:16:34.036 "num_base_bdevs_operational": 3, 00:16:34.036 "base_bdevs_list": [ 00:16:34.036 { 00:16:34.036 "name": null, 00:16:34.036 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.036 "is_configured": false, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "name": "BaseBdev2", 00:16:34.036 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:34.036 "is_configured": true, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "name": "BaseBdev3", 00:16:34.036 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:34.036 "is_configured": true, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "name": "BaseBdev4", 00:16:34.036 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:34.036 "is_configured": true, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 } 00:16:34.036 ] 00:16:34.036 }' 00:16:34.036 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.294 [2024-10-08 16:24:27.472631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.294 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:34.294 [2024-10-08 
16:24:27.520618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:34.294 [2024-10-08 16:24:27.523255] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.553 [2024-10-08 16:24:27.654509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:34.553 [2024-10-08 16:24:27.655203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:34.811 [2024-10-08 16:24:27.880762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:34.811 [2024-10-08 16:24:27.881416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:35.069 132.00 IOPS, 396.00 MiB/s [2024-10-08T16:24:28.391Z] [2024-10-08 16:24:28.243071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:35.069 [2024-10-08 16:24:28.243979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:35.326 [2024-10-08 16:24:28.478589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.326 16:24:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.326 "name": "raid_bdev1", 00:16:35.326 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:35.326 "strip_size_kb": 0, 00:16:35.326 "state": "online", 00:16:35.326 "raid_level": "raid1", 00:16:35.326 "superblock": false, 00:16:35.326 "num_base_bdevs": 4, 00:16:35.326 "num_base_bdevs_discovered": 4, 00:16:35.326 "num_base_bdevs_operational": 4, 00:16:35.326 "process": { 00:16:35.326 "type": "rebuild", 00:16:35.326 "target": "spare", 00:16:35.326 "progress": { 00:16:35.326 "blocks": 10240, 00:16:35.326 "percent": 15 00:16:35.326 } 00:16:35.326 }, 00:16:35.326 "base_bdevs_list": [ 00:16:35.326 { 00:16:35.326 "name": "spare", 00:16:35.326 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:35.326 "is_configured": true, 00:16:35.326 "data_offset": 0, 00:16:35.326 "data_size": 65536 00:16:35.326 }, 00:16:35.326 { 00:16:35.326 "name": "BaseBdev2", 00:16:35.326 "uuid": "d574ca1b-ebcb-5a7e-91f0-5117bbfb53a5", 00:16:35.326 "is_configured": true, 00:16:35.326 "data_offset": 0, 00:16:35.326 "data_size": 65536 00:16:35.326 }, 00:16:35.326 { 00:16:35.326 "name": "BaseBdev3", 00:16:35.326 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:35.326 "is_configured": true, 00:16:35.326 "data_offset": 0, 00:16:35.326 "data_size": 65536 00:16:35.326 }, 00:16:35.326 { 00:16:35.326 "name": "BaseBdev4", 00:16:35.326 "uuid": 
"42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:35.326 "is_configured": true, 00:16:35.326 "data_offset": 0, 00:16:35.326 "data_size": 65536 00:16:35.326 } 00:16:35.326 ] 00:16:35.326 }' 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.326 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.584 [2024-10-08 16:24:28.662297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:35.584 [2024-10-08 16:24:28.816820] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:35.584 [2024-10-08 16:24:28.817143] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:35.584 
16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.584 "name": "raid_bdev1", 00:16:35.584 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:35.584 "strip_size_kb": 0, 00:16:35.584 "state": "online", 00:16:35.584 "raid_level": "raid1", 00:16:35.584 "superblock": false, 00:16:35.584 "num_base_bdevs": 4, 00:16:35.584 "num_base_bdevs_discovered": 3, 00:16:35.584 "num_base_bdevs_operational": 3, 00:16:35.584 "process": { 00:16:35.584 "type": "rebuild", 00:16:35.584 "target": "spare", 00:16:35.584 "progress": { 00:16:35.584 "blocks": 12288, 00:16:35.584 "percent": 18 00:16:35.584 } 00:16:35.584 }, 00:16:35.584 "base_bdevs_list": [ 00:16:35.584 { 00:16:35.584 "name": "spare", 00:16:35.584 "uuid": 
"aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:35.584 "is_configured": true, 00:16:35.584 "data_offset": 0, 00:16:35.584 "data_size": 65536 00:16:35.584 }, 00:16:35.584 { 00:16:35.584 "name": null, 00:16:35.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.584 "is_configured": false, 00:16:35.584 "data_offset": 0, 00:16:35.584 "data_size": 65536 00:16:35.584 }, 00:16:35.584 { 00:16:35.584 "name": "BaseBdev3", 00:16:35.584 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:35.584 "is_configured": true, 00:16:35.584 "data_offset": 0, 00:16:35.584 "data_size": 65536 00:16:35.584 }, 00:16:35.584 { 00:16:35.584 "name": "BaseBdev4", 00:16:35.584 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:35.584 "is_configured": true, 00:16:35.584 "data_offset": 0, 00:16:35.584 "data_size": 65536 00:16:35.584 } 00:16:35.584 ] 00:16:35.584 }' 00:16:35.584 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.843 116.75 IOPS, 350.25 MiB/s [2024-10-08T16:24:29.165Z] [2024-10-08 16:24:28.974752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.843 
16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.843 16:24:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.843 "name": "raid_bdev1", 00:16:35.843 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:35.843 "strip_size_kb": 0, 00:16:35.843 "state": "online", 00:16:35.843 "raid_level": "raid1", 00:16:35.843 "superblock": false, 00:16:35.843 "num_base_bdevs": 4, 00:16:35.843 "num_base_bdevs_discovered": 3, 00:16:35.843 "num_base_bdevs_operational": 3, 00:16:35.843 "process": { 00:16:35.843 "type": "rebuild", 00:16:35.843 "target": "spare", 00:16:35.843 "progress": { 00:16:35.843 "blocks": 14336, 00:16:35.843 "percent": 21 00:16:35.843 } 00:16:35.843 }, 00:16:35.843 "base_bdevs_list": [ 00:16:35.843 { 00:16:35.843 "name": "spare", 00:16:35.843 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:35.843 "is_configured": true, 00:16:35.843 "data_offset": 0, 00:16:35.843 "data_size": 65536 00:16:35.843 }, 00:16:35.843 { 00:16:35.843 "name": null, 00:16:35.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.843 "is_configured": false, 00:16:35.843 "data_offset": 0, 00:16:35.843 "data_size": 65536 00:16:35.843 }, 
00:16:35.843 { 00:16:35.843 "name": "BaseBdev3", 00:16:35.843 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:35.843 "is_configured": true, 00:16:35.843 "data_offset": 0, 00:16:35.843 "data_size": 65536 00:16:35.843 }, 00:16:35.843 { 00:16:35.843 "name": "BaseBdev4", 00:16:35.843 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:35.843 "is_configured": true, 00:16:35.843 "data_offset": 0, 00:16:35.843 "data_size": 65536 00:16:35.843 } 00:16:35.843 ] 00:16:35.843 }' 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.843 16:24:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.101 [2024-10-08 16:24:29.207257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:36.359 [2024-10-08 16:24:29.534034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:36.359 [2024-10-08 16:24:29.645377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:36.926 104.80 IOPS, 314.40 MiB/s [2024-10-08T16:24:30.248Z] 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.926 "name": "raid_bdev1", 00:16:36.926 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:36.926 "strip_size_kb": 0, 00:16:36.926 "state": "online", 00:16:36.926 "raid_level": "raid1", 00:16:36.926 "superblock": false, 00:16:36.926 "num_base_bdevs": 4, 00:16:36.926 "num_base_bdevs_discovered": 3, 00:16:36.926 "num_base_bdevs_operational": 3, 00:16:36.926 "process": { 00:16:36.926 "type": "rebuild", 00:16:36.926 "target": "spare", 00:16:36.926 "progress": { 00:16:36.926 "blocks": 30720, 00:16:36.926 "percent": 46 00:16:36.926 } 00:16:36.926 }, 00:16:36.926 "base_bdevs_list": [ 00:16:36.926 { 00:16:36.926 "name": "spare", 00:16:36.926 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:36.926 "is_configured": true, 00:16:36.926 "data_offset": 0, 00:16:36.926 "data_size": 65536 00:16:36.926 }, 00:16:36.926 { 00:16:36.926 "name": null, 00:16:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.926 "is_configured": false, 00:16:36.926 "data_offset": 0, 00:16:36.926 "data_size": 65536 00:16:36.926 }, 00:16:36.926 { 00:16:36.926 "name": "BaseBdev3", 
00:16:36.926 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:36.926 "is_configured": true, 00:16:36.926 "data_offset": 0, 00:16:36.926 "data_size": 65536 00:16:36.926 }, 00:16:36.926 { 00:16:36.926 "name": "BaseBdev4", 00:16:36.926 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:36.926 "is_configured": true, 00:16:36.926 "data_offset": 0, 00:16:36.926 "data_size": 65536 00:16:36.926 } 00:16:36.926 ] 00:16:36.926 }' 00:16:36.926 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.184 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.184 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.184 [2024-10-08 16:24:30.325905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:37.184 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.184 16:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.751 93.17 IOPS, 279.50 MiB/s [2024-10-08T16:24:31.073Z] [2024-10-08 16:24:30.999230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.037 16:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.295 16:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.295 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.295 "name": "raid_bdev1", 00:16:38.295 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:38.295 "strip_size_kb": 0, 00:16:38.295 "state": "online", 00:16:38.295 "raid_level": "raid1", 00:16:38.295 "superblock": false, 00:16:38.296 "num_base_bdevs": 4, 00:16:38.296 "num_base_bdevs_discovered": 3, 00:16:38.296 "num_base_bdevs_operational": 3, 00:16:38.296 "process": { 00:16:38.296 "type": "rebuild", 00:16:38.296 "target": "spare", 00:16:38.296 "progress": { 00:16:38.296 "blocks": 49152, 00:16:38.296 "percent": 75 00:16:38.296 } 00:16:38.296 }, 00:16:38.296 "base_bdevs_list": [ 00:16:38.296 { 00:16:38.296 "name": "spare", 00:16:38.296 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:38.296 "is_configured": true, 00:16:38.296 "data_offset": 0, 00:16:38.296 "data_size": 65536 00:16:38.296 }, 00:16:38.296 { 00:16:38.296 "name": null, 00:16:38.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.296 "is_configured": false, 00:16:38.296 "data_offset": 0, 00:16:38.296 "data_size": 65536 00:16:38.296 }, 00:16:38.296 { 00:16:38.296 "name": "BaseBdev3", 00:16:38.296 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:38.296 "is_configured": true, 00:16:38.296 "data_offset": 0, 00:16:38.296 "data_size": 65536 00:16:38.296 }, 00:16:38.296 { 00:16:38.296 "name": 
"BaseBdev4", 00:16:38.296 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:38.296 "is_configured": true, 00:16:38.296 "data_offset": 0, 00:16:38.296 "data_size": 65536 00:16:38.296 } 00:16:38.296 ] 00:16:38.296 }' 00:16:38.296 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.296 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.296 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.296 [2024-10-08 16:24:31.461877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:38.296 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.296 16:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.553 [2024-10-08 16:24:31.702298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:39.068 85.71 IOPS, 257.14 MiB/s [2024-10-08T16:24:32.390Z] [2024-10-08 16:24:32.267069] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:39.068 [2024-10-08 16:24:32.375002] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:39.068 [2024-10-08 16:24:32.378560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.326 "name": "raid_bdev1", 00:16:39.326 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:39.326 "strip_size_kb": 0, 00:16:39.326 "state": "online", 00:16:39.326 "raid_level": "raid1", 00:16:39.326 "superblock": false, 00:16:39.326 "num_base_bdevs": 4, 00:16:39.326 "num_base_bdevs_discovered": 3, 00:16:39.326 "num_base_bdevs_operational": 3, 00:16:39.326 "base_bdevs_list": [ 00:16:39.326 { 00:16:39.326 "name": "spare", 00:16:39.326 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:39.326 "is_configured": true, 00:16:39.326 "data_offset": 0, 00:16:39.326 "data_size": 65536 00:16:39.326 }, 00:16:39.326 { 00:16:39.326 "name": null, 00:16:39.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.326 "is_configured": false, 00:16:39.326 "data_offset": 0, 00:16:39.326 "data_size": 65536 00:16:39.326 }, 00:16:39.326 { 00:16:39.326 "name": "BaseBdev3", 00:16:39.326 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:39.326 "is_configured": true, 00:16:39.326 "data_offset": 0, 00:16:39.326 "data_size": 65536 00:16:39.326 }, 00:16:39.326 { 00:16:39.326 "name": "BaseBdev4", 00:16:39.326 "uuid": 
"42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:39.326 "is_configured": true, 00:16:39.326 "data_offset": 0, 00:16:39.326 "data_size": 65536 00:16:39.326 } 00:16:39.326 ] 00:16:39.326 }' 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:39.326 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.584 "name": "raid_bdev1", 00:16:39.584 "uuid": 
"6df03776-7b34-4751-8215-2379e28df937", 00:16:39.584 "strip_size_kb": 0, 00:16:39.584 "state": "online", 00:16:39.584 "raid_level": "raid1", 00:16:39.584 "superblock": false, 00:16:39.584 "num_base_bdevs": 4, 00:16:39.584 "num_base_bdevs_discovered": 3, 00:16:39.584 "num_base_bdevs_operational": 3, 00:16:39.584 "base_bdevs_list": [ 00:16:39.584 { 00:16:39.584 "name": "spare", 00:16:39.584 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:39.584 "is_configured": true, 00:16:39.584 "data_offset": 0, 00:16:39.584 "data_size": 65536 00:16:39.584 }, 00:16:39.584 { 00:16:39.584 "name": null, 00:16:39.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.584 "is_configured": false, 00:16:39.584 "data_offset": 0, 00:16:39.584 "data_size": 65536 00:16:39.584 }, 00:16:39.584 { 00:16:39.584 "name": "BaseBdev3", 00:16:39.584 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:39.584 "is_configured": true, 00:16:39.584 "data_offset": 0, 00:16:39.584 "data_size": 65536 00:16:39.584 }, 00:16:39.584 { 00:16:39.584 "name": "BaseBdev4", 00:16:39.584 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:39.584 "is_configured": true, 00:16:39.584 "data_offset": 0, 00:16:39.584 "data_size": 65536 00:16:39.584 } 00:16:39.584 ] 00:16:39.584 }' 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.584 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.585 "name": "raid_bdev1", 00:16:39.585 "uuid": "6df03776-7b34-4751-8215-2379e28df937", 00:16:39.585 "strip_size_kb": 0, 00:16:39.585 "state": "online", 00:16:39.585 "raid_level": "raid1", 00:16:39.585 "superblock": false, 00:16:39.585 "num_base_bdevs": 4, 00:16:39.585 "num_base_bdevs_discovered": 3, 00:16:39.585 "num_base_bdevs_operational": 3, 00:16:39.585 "base_bdevs_list": [ 00:16:39.585 { 00:16:39.585 "name": "spare", 00:16:39.585 "uuid": "aae67555-1bc7-5d3a-90e0-5050f5805451", 00:16:39.585 "is_configured": true, 00:16:39.585 
"data_offset": 0, 00:16:39.585 "data_size": 65536 00:16:39.585 }, 00:16:39.585 { 00:16:39.585 "name": null, 00:16:39.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.585 "is_configured": false, 00:16:39.585 "data_offset": 0, 00:16:39.585 "data_size": 65536 00:16:39.585 }, 00:16:39.585 { 00:16:39.585 "name": "BaseBdev3", 00:16:39.585 "uuid": "03c52420-261f-5603-a6d6-cadfbf84d3cb", 00:16:39.585 "is_configured": true, 00:16:39.585 "data_offset": 0, 00:16:39.585 "data_size": 65536 00:16:39.585 }, 00:16:39.585 { 00:16:39.585 "name": "BaseBdev4", 00:16:39.585 "uuid": "42a38299-654f-5a69-b78b-e13c307bb49f", 00:16:39.585 "is_configured": true, 00:16:39.585 "data_offset": 0, 00:16:39.585 "data_size": 65536 00:16:39.585 } 00:16:39.585 ] 00:16:39.585 }' 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.585 16:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.101 79.50 IOPS, 238.50 MiB/s [2024-10-08T16:24:33.423Z] 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.101 [2024-10-08 16:24:33.359918] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.101 [2024-10-08 16:24:33.359984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.101 00:16:40.101 Latency(us) 00:16:40.101 [2024-10-08T16:24:33.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.101 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:40.101 raid_bdev1 : 8.42 76.72 230.16 0.00 0.00 17309.58 292.31 120586.24 00:16:40.101 [2024-10-08T16:24:33.423Z] 
=================================================================================================================== 00:16:40.101 [2024-10-08T16:24:33.423Z] Total : 76.72 230.16 0.00 0.00 17309.58 292.31 120586.24 00:16:40.101 { 00:16:40.101 "results": [ 00:16:40.101 { 00:16:40.101 "job": "raid_bdev1", 00:16:40.101 "core_mask": "0x1", 00:16:40.101 "workload": "randrw", 00:16:40.101 "percentage": 50, 00:16:40.101 "status": "finished", 00:16:40.101 "queue_depth": 2, 00:16:40.101 "io_size": 3145728, 00:16:40.101 "runtime": 8.420105, 00:16:40.101 "iops": 76.72113352505698, 00:16:40.101 "mibps": 230.16340057517095, 00:16:40.101 "io_failed": 0, 00:16:40.101 "io_timeout": 0, 00:16:40.101 "avg_latency_us": 17309.577258654655, 00:16:40.101 "min_latency_us": 292.30545454545455, 00:16:40.101 "max_latency_us": 120586.24 00:16:40.101 } 00:16:40.101 ], 00:16:40.101 "core_count": 1 00:16:40.101 } 00:16:40.101 [2024-10-08 16:24:33.395788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.101 [2024-10-08 16:24:33.395874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.101 [2024-10-08 16:24:33.396034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.101 [2024-10-08 16:24:33.396059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.101 16:24:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.359 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:40.617 /dev/nbd0 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i 
= 1 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.617 1+0 records in 00:16:40.617 1+0 records out 00:16:40.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402847 s, 10.2 MB/s 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:40.617 16:24:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.617 16:24:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:40.876 /dev/nbd1 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.876 16:24:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.876 1+0 records in 00:16:40.876 1+0 records out 00:16:40.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501437 s, 8.2 MB/s 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.876 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:41.173 
16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.173 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.432 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:41.692 /dev/nbd1 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.692 1+0 records in 00:16:41.692 1+0 records out 00:16:41.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430373 s, 9.5 
MB/s 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.692 16:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.951 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.209 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.468 16:24:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79423 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 79423 ']' 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 79423 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79423 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.468 killing process with pid 79423 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79423' 00:16:42.468 Received shutdown signal, test time was about 10.743549 seconds 00:16:42.468 00:16:42.468 Latency(us) 00:16:42.468 [2024-10-08T16:24:35.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.468 [2024-10-08T16:24:35.790Z] =================================================================================================================== 00:16:42.468 [2024-10-08T16:24:35.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 79423 00:16:42.468 [2024-10-08 16:24:35.699215] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:16:42.468 16:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 79423 00:16:43.034 [2024-10-08 16:24:36.068092] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.445 00:16:44.445 real 0m14.563s 00:16:44.445 user 0m18.954s 00:16:44.445 sys 0m1.991s 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 ************************************ 00:16:44.445 END TEST raid_rebuild_test_io 00:16:44.445 ************************************ 00:16:44.445 16:24:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:44.445 16:24:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:44.445 16:24:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.445 16:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 ************************************ 00:16:44.445 START TEST raid_rebuild_test_sb_io 00:16:44.445 ************************************ 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.445 16:24:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.445 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79838 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79838 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79838 ']' 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.446 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.446 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:44.446 Zero copy mechanism will not be used. 00:16:44.446 [2024-10-08 16:24:37.576182] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:16:44.446 [2024-10-08 16:24:37.576368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79838 ] 00:16:44.446 [2024-10-08 16:24:37.744258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.705 [2024-10-08 16:24:37.987443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.964 [2024-10-08 16:24:38.187639] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.964 [2024-10-08 16:24:38.187718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.221 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.478 BaseBdev1_malloc 00:16:45.478 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.478 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.478 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:45.478 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.478 [2024-10-08 16:24:38.574508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.478 [2024-10-08 16:24:38.574621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.478 [2024-10-08 16:24:38.574656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.479 [2024-10-08 16:24:38.574679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.479 [2024-10-08 16:24:38.577444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.479 [2024-10-08 16:24:38.577513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.479 BaseBdev1 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 BaseBdev2_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 [2024-10-08 16:24:38.640132] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.479 [2024-10-08 16:24:38.640226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.479 [2024-10-08 16:24:38.640255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.479 [2024-10-08 16:24:38.640274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.479 [2024-10-08 16:24:38.643162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.479 [2024-10-08 16:24:38.643229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.479 BaseBdev2 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 BaseBdev3_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 [2024-10-08 16:24:38.696088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.479 [2024-10-08 16:24:38.696184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:45.479 [2024-10-08 16:24:38.696218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.479 [2024-10-08 16:24:38.696238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.479 [2024-10-08 16:24:38.699142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.479 [2024-10-08 16:24:38.699196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.479 BaseBdev3 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 BaseBdev4_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.479 [2024-10-08 16:24:38.754283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.479 [2024-10-08 16:24:38.754366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.479 [2024-10-08 16:24:38.754395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.479 
[2024-10-08 16:24:38.754413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.479 [2024-10-08 16:24:38.757245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.479 [2024-10-08 16:24:38.757296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.479 BaseBdev4 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.479 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 spare_malloc 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 spare_delay 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 [2024-10-08 16:24:38.823070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.738 [2024-10-08 16:24:38.823152] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.738 [2024-10-08 16:24:38.823187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.738 [2024-10-08 16:24:38.823205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.738 [2024-10-08 16:24:38.826157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.738 [2024-10-08 16:24:38.826206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.738 spare 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 [2024-10-08 16:24:38.835170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.738 [2024-10-08 16:24:38.837762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.738 [2024-10-08 16:24:38.837864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.738 [2024-10-08 16:24:38.837956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.738 [2024-10-08 16:24:38.838224] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.738 [2024-10-08 16:24:38.838255] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.738 [2024-10-08 16:24:38.838644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.738 [2024-10-08 16:24:38.838894] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.738 [2024-10-08 16:24:38.838919] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.738 [2024-10-08 16:24:38.839196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 16:24:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.738 "name": "raid_bdev1", 00:16:45.738 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:45.738 "strip_size_kb": 0, 00:16:45.738 "state": "online", 00:16:45.738 "raid_level": "raid1", 00:16:45.738 "superblock": true, 00:16:45.738 "num_base_bdevs": 4, 00:16:45.738 "num_base_bdevs_discovered": 4, 00:16:45.738 "num_base_bdevs_operational": 4, 00:16:45.738 "base_bdevs_list": [ 00:16:45.738 { 00:16:45.738 "name": "BaseBdev1", 00:16:45.738 "uuid": "8dd82ea5-aa57-5f04-abb0-63b361d29821", 00:16:45.738 "is_configured": true, 00:16:45.738 "data_offset": 2048, 00:16:45.738 "data_size": 63488 00:16:45.738 }, 00:16:45.738 { 00:16:45.738 "name": "BaseBdev2", 00:16:45.738 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:45.738 "is_configured": true, 00:16:45.738 "data_offset": 2048, 00:16:45.738 "data_size": 63488 00:16:45.738 }, 00:16:45.738 { 00:16:45.738 "name": "BaseBdev3", 00:16:45.738 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:45.738 "is_configured": true, 00:16:45.738 "data_offset": 2048, 00:16:45.738 "data_size": 63488 00:16:45.738 }, 00:16:45.738 { 00:16:45.738 "name": "BaseBdev4", 00:16:45.738 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:45.738 "is_configured": true, 00:16:45.738 "data_offset": 2048, 00:16:45.738 "data_size": 63488 00:16:45.738 } 00:16:45.738 ] 00:16:45.738 }' 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.738 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.344 [2024-10-08 16:24:39.391719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.344 [2024-10-08 16:24:39.507284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.344 
16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.344 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.345 "name": "raid_bdev1", 00:16:46.345 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 
00:16:46.345 "strip_size_kb": 0, 00:16:46.345 "state": "online", 00:16:46.345 "raid_level": "raid1", 00:16:46.345 "superblock": true, 00:16:46.345 "num_base_bdevs": 4, 00:16:46.345 "num_base_bdevs_discovered": 3, 00:16:46.345 "num_base_bdevs_operational": 3, 00:16:46.345 "base_bdevs_list": [ 00:16:46.345 { 00:16:46.345 "name": null, 00:16:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.345 "is_configured": false, 00:16:46.345 "data_offset": 0, 00:16:46.345 "data_size": 63488 00:16:46.345 }, 00:16:46.345 { 00:16:46.345 "name": "BaseBdev2", 00:16:46.345 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:46.345 "is_configured": true, 00:16:46.345 "data_offset": 2048, 00:16:46.345 "data_size": 63488 00:16:46.345 }, 00:16:46.345 { 00:16:46.345 "name": "BaseBdev3", 00:16:46.345 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:46.345 "is_configured": true, 00:16:46.345 "data_offset": 2048, 00:16:46.345 "data_size": 63488 00:16:46.345 }, 00:16:46.345 { 00:16:46.345 "name": "BaseBdev4", 00:16:46.345 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:46.345 "is_configured": true, 00:16:46.345 "data_offset": 2048, 00:16:46.345 "data_size": 63488 00:16:46.345 } 00:16:46.345 ] 00:16:46.345 }' 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.345 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.345 [2024-10-08 16:24:39.635650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:46.345 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.345 Zero copy mechanism will not be used. 00:16:46.345 Running I/O for 60 seconds... 
00:16:46.911 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.911 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.911 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.911 [2024-10-08 16:24:40.054114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.911 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.911 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.911 [2024-10-08 16:24:40.157452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:46.911 [2024-10-08 16:24:40.160150] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.169 [2024-10-08 16:24:40.271674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:47.169 [2024-10-08 16:24:40.272438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:47.427 [2024-10-08 16:24:40.505907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:47.427 [2024-10-08 16:24:40.506910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:47.684 116.00 IOPS, 348.00 MiB/s [2024-10-08T16:24:41.006Z] [2024-10-08 16:24:40.883946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:47.942 [2024-10-08 16:24:41.020282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.942 "name": "raid_bdev1", 00:16:47.942 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:47.942 "strip_size_kb": 0, 00:16:47.942 "state": "online", 00:16:47.942 "raid_level": "raid1", 00:16:47.942 "superblock": true, 00:16:47.942 "num_base_bdevs": 4, 00:16:47.942 "num_base_bdevs_discovered": 4, 00:16:47.942 "num_base_bdevs_operational": 4, 00:16:47.942 "process": { 00:16:47.942 "type": "rebuild", 00:16:47.942 "target": "spare", 00:16:47.942 "progress": { 00:16:47.942 "blocks": 10240, 00:16:47.942 "percent": 16 00:16:47.942 } 00:16:47.942 }, 00:16:47.942 "base_bdevs_list": [ 00:16:47.942 { 00:16:47.942 "name": "spare", 00:16:47.942 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:47.942 "is_configured": true, 00:16:47.942 "data_offset": 2048, 00:16:47.942 "data_size": 63488 
00:16:47.942 }, 00:16:47.942 { 00:16:47.942 "name": "BaseBdev2", 00:16:47.942 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:47.942 "is_configured": true, 00:16:47.942 "data_offset": 2048, 00:16:47.942 "data_size": 63488 00:16:47.942 }, 00:16:47.942 { 00:16:47.942 "name": "BaseBdev3", 00:16:47.942 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:47.942 "is_configured": true, 00:16:47.942 "data_offset": 2048, 00:16:47.942 "data_size": 63488 00:16:47.942 }, 00:16:47.942 { 00:16:47.942 "name": "BaseBdev4", 00:16:47.942 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:47.942 "is_configured": true, 00:16:47.942 "data_offset": 2048, 00:16:47.942 "data_size": 63488 00:16:47.942 } 00:16:47.942 ] 00:16:47.942 }' 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.942 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.200 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.200 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.200 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.200 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.200 [2024-10-08 16:24:41.281485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.200 [2024-10-08 16:24:41.361917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:48.200 [2024-10-08 16:24:41.465038] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.200 [2024-10-08 
16:24:41.478867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.200 [2024-10-08 16:24:41.478964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.200 [2024-10-08 16:24:41.478983] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.200 [2024-10-08 16:24:41.520399] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.458 16:24:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.458 "name": "raid_bdev1", 00:16:48.458 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:48.458 "strip_size_kb": 0, 00:16:48.458 "state": "online", 00:16:48.458 "raid_level": "raid1", 00:16:48.458 "superblock": true, 00:16:48.458 "num_base_bdevs": 4, 00:16:48.458 "num_base_bdevs_discovered": 3, 00:16:48.458 "num_base_bdevs_operational": 3, 00:16:48.458 "base_bdevs_list": [ 00:16:48.458 { 00:16:48.458 "name": null, 00:16:48.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.458 "is_configured": false, 00:16:48.458 "data_offset": 0, 00:16:48.458 "data_size": 63488 00:16:48.458 }, 00:16:48.458 { 00:16:48.458 "name": "BaseBdev2", 00:16:48.458 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:48.458 "is_configured": true, 00:16:48.458 "data_offset": 2048, 00:16:48.458 "data_size": 63488 00:16:48.458 }, 00:16:48.458 { 00:16:48.458 "name": "BaseBdev3", 00:16:48.458 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:48.458 "is_configured": true, 00:16:48.458 "data_offset": 2048, 00:16:48.458 "data_size": 63488 00:16:48.458 }, 00:16:48.458 { 00:16:48.458 "name": "BaseBdev4", 00:16:48.458 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:48.458 "is_configured": true, 00:16:48.458 "data_offset": 2048, 00:16:48.458 "data_size": 63488 00:16:48.458 } 00:16:48.458 ] 00:16:48.458 }' 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.458 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.025 112.00 IOPS, 336.00 MiB/s 
[2024-10-08T16:24:42.347Z] 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.025 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.026 "name": "raid_bdev1", 00:16:49.026 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:49.026 "strip_size_kb": 0, 00:16:49.026 "state": "online", 00:16:49.026 "raid_level": "raid1", 00:16:49.026 "superblock": true, 00:16:49.026 "num_base_bdevs": 4, 00:16:49.026 "num_base_bdevs_discovered": 3, 00:16:49.026 "num_base_bdevs_operational": 3, 00:16:49.026 "base_bdevs_list": [ 00:16:49.026 { 00:16:49.026 "name": null, 00:16:49.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.026 "is_configured": false, 00:16:49.026 "data_offset": 0, 00:16:49.026 "data_size": 63488 00:16:49.026 }, 00:16:49.026 { 00:16:49.026 "name": "BaseBdev2", 00:16:49.026 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:49.026 
"is_configured": true, 00:16:49.026 "data_offset": 2048, 00:16:49.026 "data_size": 63488 00:16:49.026 }, 00:16:49.026 { 00:16:49.026 "name": "BaseBdev3", 00:16:49.026 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:49.026 "is_configured": true, 00:16:49.026 "data_offset": 2048, 00:16:49.026 "data_size": 63488 00:16:49.026 }, 00:16:49.026 { 00:16:49.026 "name": "BaseBdev4", 00:16:49.026 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:49.026 "is_configured": true, 00:16:49.026 "data_offset": 2048, 00:16:49.026 "data_size": 63488 00:16:49.026 } 00:16:49.026 ] 00:16:49.026 }' 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.026 [2024-10-08 16:24:42.231164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.026 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:49.026 [2024-10-08 16:24:42.319841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:49.026 [2024-10-08 16:24:42.322625] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.284 
[2024-10-08 16:24:42.431346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:49.284 [2024-10-08 16:24:42.433249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:49.543 122.00 IOPS, 366.00 MiB/s [2024-10-08T16:24:42.865Z] [2024-10-08 16:24:42.657321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:49.543 [2024-10-08 16:24:42.657751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:49.801 [2024-10-08 16:24:42.948224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:50.059 [2024-10-08 16:24:43.182641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.059 "name": "raid_bdev1", 00:16:50.059 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:50.059 "strip_size_kb": 0, 00:16:50.059 "state": "online", 00:16:50.059 "raid_level": "raid1", 00:16:50.059 "superblock": true, 00:16:50.059 "num_base_bdevs": 4, 00:16:50.059 "num_base_bdevs_discovered": 4, 00:16:50.059 "num_base_bdevs_operational": 4, 00:16:50.059 "process": { 00:16:50.059 "type": "rebuild", 00:16:50.059 "target": "spare", 00:16:50.059 "progress": { 00:16:50.059 "blocks": 10240, 00:16:50.059 "percent": 16 00:16:50.059 } 00:16:50.059 }, 00:16:50.059 "base_bdevs_list": [ 00:16:50.059 { 00:16:50.059 "name": "spare", 00:16:50.059 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:50.059 "is_configured": true, 00:16:50.059 "data_offset": 2048, 00:16:50.059 "data_size": 63488 00:16:50.059 }, 00:16:50.059 { 00:16:50.059 "name": "BaseBdev2", 00:16:50.059 "uuid": "ef63ed36-f24e-501d-9a2e-8c2f6819a732", 00:16:50.059 "is_configured": true, 00:16:50.059 "data_offset": 2048, 00:16:50.059 "data_size": 63488 00:16:50.059 }, 00:16:50.059 { 00:16:50.059 "name": "BaseBdev3", 00:16:50.059 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:50.059 "is_configured": true, 00:16:50.059 "data_offset": 2048, 00:16:50.059 "data_size": 63488 00:16:50.059 }, 00:16:50.059 { 00:16:50.059 "name": "BaseBdev4", 00:16:50.059 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:50.059 "is_configured": true, 00:16:50.059 "data_offset": 2048, 00:16:50.059 "data_size": 63488 00:16:50.059 } 00:16:50.059 ] 00:16:50.059 }' 00:16:50.059 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:50.317 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.317 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.317 [2024-10-08 16:24:43.451759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.317 [2024-10-08 16:24:43.542288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:50.575 108.25 IOPS, 324.75 MiB/s [2024-10-08T16:24:43.897Z] [2024-10-08 16:24:43.742450] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:50.575 [2024-10-08 16:24:43.742774] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.575 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.575 "name": "raid_bdev1", 00:16:50.575 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:50.575 "strip_size_kb": 0, 00:16:50.575 "state": "online", 00:16:50.575 "raid_level": "raid1", 00:16:50.575 "superblock": true, 00:16:50.575 "num_base_bdevs": 4, 00:16:50.575 "num_base_bdevs_discovered": 3, 00:16:50.575 "num_base_bdevs_operational": 3, 00:16:50.575 "process": { 00:16:50.575 "type": "rebuild", 00:16:50.575 "target": "spare", 00:16:50.575 "progress": { 00:16:50.575 "blocks": 14336, 00:16:50.575 "percent": 22 00:16:50.575 } 00:16:50.575 }, 00:16:50.575 
"base_bdevs_list": [ 00:16:50.575 { 00:16:50.575 "name": "spare", 00:16:50.575 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:50.575 "is_configured": true, 00:16:50.575 "data_offset": 2048, 00:16:50.575 "data_size": 63488 00:16:50.575 }, 00:16:50.575 { 00:16:50.575 "name": null, 00:16:50.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.576 "is_configured": false, 00:16:50.576 "data_offset": 0, 00:16:50.576 "data_size": 63488 00:16:50.576 }, 00:16:50.576 { 00:16:50.576 "name": "BaseBdev3", 00:16:50.576 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:50.576 "is_configured": true, 00:16:50.576 "data_offset": 2048, 00:16:50.576 "data_size": 63488 00:16:50.576 }, 00:16:50.576 { 00:16:50.576 "name": "BaseBdev4", 00:16:50.576 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:50.576 "is_configured": true, 00:16:50.576 "data_offset": 2048, 00:16:50.576 "data_size": 63488 00:16:50.576 } 00:16:50.576 ] 00:16:50.576 }' 00:16:50.576 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.576 [2024-10-08 16:24:43.857917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:50.576 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.576 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.878 "name": "raid_bdev1", 00:16:50.878 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:50.878 "strip_size_kb": 0, 00:16:50.878 "state": "online", 00:16:50.878 "raid_level": "raid1", 00:16:50.878 "superblock": true, 00:16:50.878 "num_base_bdevs": 4, 00:16:50.878 "num_base_bdevs_discovered": 3, 00:16:50.878 "num_base_bdevs_operational": 3, 00:16:50.878 "process": { 00:16:50.878 "type": "rebuild", 00:16:50.878 "target": "spare", 00:16:50.878 "progress": { 00:16:50.878 "blocks": 16384, 00:16:50.878 "percent": 25 00:16:50.878 } 00:16:50.878 }, 00:16:50.878 "base_bdevs_list": [ 00:16:50.878 { 00:16:50.878 "name": "spare", 00:16:50.878 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:50.878 "is_configured": true, 00:16:50.878 "data_offset": 2048, 00:16:50.878 "data_size": 63488 00:16:50.878 }, 00:16:50.878 { 00:16:50.878 "name": null, 00:16:50.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.878 
"is_configured": false, 00:16:50.878 "data_offset": 0, 00:16:50.878 "data_size": 63488 00:16:50.878 }, 00:16:50.878 { 00:16:50.878 "name": "BaseBdev3", 00:16:50.878 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:50.878 "is_configured": true, 00:16:50.878 "data_offset": 2048, 00:16:50.878 "data_size": 63488 00:16:50.878 }, 00:16:50.878 { 00:16:50.878 "name": "BaseBdev4", 00:16:50.878 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:50.878 "is_configured": true, 00:16:50.878 "data_offset": 2048, 00:16:50.878 "data_size": 63488 00:16:50.878 } 00:16:50.878 ] 00:16:50.878 }' 00:16:50.878 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.878 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.878 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.878 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.878 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.009 97.40 IOPS, 292.20 MiB/s [2024-10-08T16:24:45.331Z] 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.009 "name": "raid_bdev1", 00:16:52.009 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:52.009 "strip_size_kb": 0, 00:16:52.009 "state": "online", 00:16:52.009 "raid_level": "raid1", 00:16:52.009 "superblock": true, 00:16:52.009 "num_base_bdevs": 4, 00:16:52.009 "num_base_bdevs_discovered": 3, 00:16:52.009 "num_base_bdevs_operational": 3, 00:16:52.009 "process": { 00:16:52.009 "type": "rebuild", 00:16:52.009 "target": "spare", 00:16:52.009 "progress": { 00:16:52.009 "blocks": 36864, 00:16:52.009 "percent": 58 00:16:52.009 } 00:16:52.009 }, 00:16:52.009 "base_bdevs_list": [ 00:16:52.009 { 00:16:52.009 "name": "spare", 00:16:52.009 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:52.009 "is_configured": true, 00:16:52.009 "data_offset": 2048, 00:16:52.009 "data_size": 63488 00:16:52.009 }, 00:16:52.009 { 00:16:52.009 "name": null, 00:16:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.009 "is_configured": false, 00:16:52.009 "data_offset": 0, 00:16:52.009 "data_size": 63488 00:16:52.009 }, 00:16:52.009 { 00:16:52.009 "name": "BaseBdev3", 00:16:52.009 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:52.009 "is_configured": true, 00:16:52.009 "data_offset": 2048, 00:16:52.009 "data_size": 63488 00:16:52.009 }, 00:16:52.009 { 00:16:52.009 "name": "BaseBdev4", 00:16:52.009 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:52.009 "is_configured": true, 00:16:52.009 
"data_offset": 2048, 00:16:52.009 "data_size": 63488 00:16:52.009 } 00:16:52.009 ] 00:16:52.009 }' 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.009 [2024-10-08 16:24:45.149023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.009 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.268 [2024-10-08 16:24:45.381489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:52.268 [2024-10-08 16:24:45.382179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:52.784 89.33 IOPS, 268.00 MiB/s [2024-10-08T16:24:46.106Z] [2024-10-08 16:24:46.095212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:53.093 [2024-10-08 16:24:46.205060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.093 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.093 "name": "raid_bdev1", 00:16:53.093 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:53.093 "strip_size_kb": 0, 00:16:53.093 "state": "online", 00:16:53.093 "raid_level": "raid1", 00:16:53.093 "superblock": true, 00:16:53.093 "num_base_bdevs": 4, 00:16:53.093 "num_base_bdevs_discovered": 3, 00:16:53.093 "num_base_bdevs_operational": 3, 00:16:53.093 "process": { 00:16:53.093 "type": "rebuild", 00:16:53.094 "target": "spare", 00:16:53.094 "progress": { 00:16:53.094 "blocks": 53248, 00:16:53.094 "percent": 83 00:16:53.094 } 00:16:53.094 }, 00:16:53.094 "base_bdevs_list": [ 00:16:53.094 { 00:16:53.094 "name": "spare", 00:16:53.094 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:53.094 "is_configured": true, 00:16:53.094 "data_offset": 2048, 00:16:53.094 "data_size": 63488 00:16:53.094 }, 00:16:53.094 { 00:16:53.094 "name": null, 00:16:53.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.094 "is_configured": false, 00:16:53.094 "data_offset": 0, 00:16:53.094 "data_size": 63488 00:16:53.094 }, 00:16:53.094 { 00:16:53.094 "name": "BaseBdev3", 00:16:53.094 
"uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:53.094 "is_configured": true, 00:16:53.094 "data_offset": 2048, 00:16:53.094 "data_size": 63488 00:16:53.094 }, 00:16:53.094 { 00:16:53.094 "name": "BaseBdev4", 00:16:53.094 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:53.094 "is_configured": true, 00:16:53.094 "data_offset": 2048, 00:16:53.094 "data_size": 63488 00:16:53.094 } 00:16:53.094 ] 00:16:53.094 }' 00:16:53.094 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.094 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.094 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.367 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.367 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.367 [2024-10-08 16:24:46.538546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:53.367 [2024-10-08 16:24:46.539973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:53.626 82.43 IOPS, 247.29 MiB/s [2024-10-08T16:24:46.948Z] [2024-10-08 16:24:46.761802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:53.626 [2024-10-08 16:24:46.762706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:53.884 [2024-10-08 16:24:47.108855] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:54.142 [2024-10-08 16:24:47.216089] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:54.142 [2024-10-08 
16:24:47.219430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.142 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.400 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.400 "name": "raid_bdev1", 00:16:54.400 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:54.400 "strip_size_kb": 0, 00:16:54.400 "state": "online", 00:16:54.400 "raid_level": "raid1", 00:16:54.400 "superblock": true, 00:16:54.400 "num_base_bdevs": 4, 00:16:54.400 "num_base_bdevs_discovered": 3, 00:16:54.400 "num_base_bdevs_operational": 3, 00:16:54.400 "base_bdevs_list": [ 00:16:54.400 { 00:16:54.400 "name": "spare", 00:16:54.400 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:54.400 "is_configured": true, 00:16:54.400 
"data_offset": 2048, 00:16:54.400 "data_size": 63488 00:16:54.400 }, 00:16:54.400 { 00:16:54.400 "name": null, 00:16:54.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.400 "is_configured": false, 00:16:54.400 "data_offset": 0, 00:16:54.400 "data_size": 63488 00:16:54.400 }, 00:16:54.400 { 00:16:54.400 "name": "BaseBdev3", 00:16:54.400 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:54.401 "is_configured": true, 00:16:54.401 "data_offset": 2048, 00:16:54.401 "data_size": 63488 00:16:54.401 }, 00:16:54.401 { 00:16:54.401 "name": "BaseBdev4", 00:16:54.401 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:54.401 "is_configured": true, 00:16:54.401 "data_offset": 2048, 00:16:54.401 "data_size": 63488 00:16:54.401 } 00:16:54.401 ] 00:16:54.401 }' 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.401 "name": "raid_bdev1", 00:16:54.401 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:54.401 "strip_size_kb": 0, 00:16:54.401 "state": "online", 00:16:54.401 "raid_level": "raid1", 00:16:54.401 "superblock": true, 00:16:54.401 "num_base_bdevs": 4, 00:16:54.401 "num_base_bdevs_discovered": 3, 00:16:54.401 "num_base_bdevs_operational": 3, 00:16:54.401 "base_bdevs_list": [ 00:16:54.401 { 00:16:54.401 "name": "spare", 00:16:54.401 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:54.401 "is_configured": true, 00:16:54.401 "data_offset": 2048, 00:16:54.401 "data_size": 63488 00:16:54.401 }, 00:16:54.401 { 00:16:54.401 "name": null, 00:16:54.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.401 "is_configured": false, 00:16:54.401 "data_offset": 0, 00:16:54.401 "data_size": 63488 00:16:54.401 }, 00:16:54.401 { 00:16:54.401 "name": "BaseBdev3", 00:16:54.401 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:54.401 "is_configured": true, 00:16:54.401 "data_offset": 2048, 00:16:54.401 "data_size": 63488 00:16:54.401 }, 00:16:54.401 { 00:16:54.401 "name": "BaseBdev4", 00:16:54.401 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:54.401 "is_configured": true, 00:16:54.401 "data_offset": 2048, 00:16:54.401 "data_size": 63488 00:16:54.401 } 00:16:54.401 ] 00:16:54.401 }' 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:16:54.401 75.75 IOPS, 227.25 MiB/s [2024-10-08T16:24:47.723Z] 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.401 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.659 "name": "raid_bdev1", 00:16:54.659 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:16:54.659 "strip_size_kb": 0, 00:16:54.659 "state": "online", 00:16:54.659 "raid_level": "raid1", 00:16:54.659 "superblock": true, 00:16:54.659 "num_base_bdevs": 4, 00:16:54.659 "num_base_bdevs_discovered": 3, 00:16:54.659 "num_base_bdevs_operational": 3, 00:16:54.659 "base_bdevs_list": [ 00:16:54.659 { 00:16:54.659 "name": "spare", 00:16:54.659 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:16:54.659 "is_configured": true, 00:16:54.659 "data_offset": 2048, 00:16:54.659 "data_size": 63488 00:16:54.659 }, 00:16:54.659 { 00:16:54.659 "name": null, 00:16:54.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.659 "is_configured": false, 00:16:54.659 "data_offset": 0, 00:16:54.659 "data_size": 63488 00:16:54.659 }, 00:16:54.659 { 00:16:54.659 "name": "BaseBdev3", 00:16:54.659 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:16:54.659 "is_configured": true, 00:16:54.659 "data_offset": 2048, 00:16:54.659 "data_size": 63488 00:16:54.659 }, 00:16:54.659 { 00:16:54.659 "name": "BaseBdev4", 00:16:54.659 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:16:54.659 "is_configured": true, 00:16:54.659 "data_offset": 2048, 00:16:54.659 "data_size": 63488 00:16:54.659 } 00:16:54.659 ] 00:16:54.659 }' 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.659 16:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.917 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.918 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.918 16:24:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.189 [2024-10-08 16:24:48.242222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.189 [2024-10-08 16:24:48.242268] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.189 00:16:55.189 Latency(us) 00:16:55.189 [2024-10-08T16:24:48.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.190 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:55.190 raid_bdev1 : 8.69 71.71 215.13 0.00 0.00 18488.63 288.58 122969.37 00:16:55.190 [2024-10-08T16:24:48.512Z] =================================================================================================================== 00:16:55.190 [2024-10-08T16:24:48.512Z] Total : 71.71 215.13 0.00 0.00 18488.63 288.58 122969.37 00:16:55.190 { 00:16:55.190 "results": [ 00:16:55.190 { 00:16:55.190 "job": "raid_bdev1", 00:16:55.190 "core_mask": "0x1", 00:16:55.190 "workload": "randrw", 00:16:55.190 "percentage": 50, 00:16:55.190 "status": "finished", 00:16:55.190 "queue_depth": 2, 00:16:55.190 "io_size": 3145728, 00:16:55.190 "runtime": 8.687766, 00:16:55.190 "iops": 71.7100345474314, 00:16:55.190 "mibps": 215.13010364229422, 00:16:55.190 "io_failed": 0, 00:16:55.190 "io_timeout": 0, 00:16:55.190 "avg_latency_us": 18488.625571282653, 00:16:55.190 "min_latency_us": 288.58181818181816, 00:16:55.190 "max_latency_us": 122969.36727272728 00:16:55.190 } 00:16:55.190 ], 00:16:55.190 "core_count": 1 00:16:55.190 } 00:16:55.190 [2024-10-08 16:24:48.345606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.190 [2024-10-08 16:24:48.345679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.190 [2024-10-08 16:24:48.345801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:55.190 [2024-10-08 16:24:48.345822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:55.190 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:16:55.448 /dev/nbd0
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:55.448 1+0 records in
00:16:55.448 1+0 records out
00:16:55.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717712 s, 5.7 MB/s
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:55.448 16:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:16:56.016 /dev/nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:56.016 1+0 records in
00:16:56.016 1+0 records out
00:16:56.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603106 s, 6.8 MB/s
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:56.016 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:56.580 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:16:56.839 /dev/nbd1
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:56.839 1+0 records in
00:16:56.839 1+0 records out
00:16:56.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287464 s, 14.2 MB/s
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:56.839 16:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:56.839 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:57.121 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:57.380 [2024-10-08 16:24:50.652191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:57.380 [2024-10-08 16:24:50.652322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:57.380 [2024-10-08 16:24:50.652357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:16:57.380 [2024-10-08 16:24:50.652376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:57.380 [2024-10-08 16:24:50.655586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:57.380 [2024-10-08 16:24:50.655671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:57.380 [2024-10-08 16:24:50.655809] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:16:57.380 [2024-10-08 16:24:50.655899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:57.380 [2024-10-08 16:24:50.656102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:57.380 [2024-10-08 16:24:50.656343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:57.380 spare
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.380 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:57.638 [2024-10-08 16:24:50.756507] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:16:57.638 [2024-10-08 16:24:50.756621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:57.638 [2024-10-08 16:24:50.757148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:16:57.638 [2024-10-08 16:24:50.757429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:16:57.638 [2024-10-08 16:24:50.757459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:16:57.638 [2024-10-08 16:24:50.757723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.638 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:57.638 "name": "raid_bdev1",
00:16:57.638 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8",
00:16:57.638 "strip_size_kb": 0,
00:16:57.638 "state": "online",
00:16:57.638 "raid_level": "raid1",
00:16:57.638 "superblock": true,
00:16:57.639 "num_base_bdevs": 4,
00:16:57.639 "num_base_bdevs_discovered": 3,
00:16:57.639 "num_base_bdevs_operational": 3,
00:16:57.639 "base_bdevs_list": [
00:16:57.639 {
00:16:57.639 "name": "spare",
00:16:57.639 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef",
00:16:57.639 "is_configured": true,
00:16:57.639 "data_offset": 2048,
00:16:57.639 "data_size": 63488
00:16:57.639 },
00:16:57.639 {
00:16:57.639 "name": null,
00:16:57.639 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:57.639 "is_configured": false,
00:16:57.639 "data_offset": 2048,
00:16:57.639 "data_size": 63488
00:16:57.639 },
00:16:57.639 {
00:16:57.639 "name": "BaseBdev3",
00:16:57.639 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84",
00:16:57.639 "is_configured": true,
00:16:57.639 "data_offset": 2048,
00:16:57.639 "data_size": 63488
00:16:57.639 },
00:16:57.639 {
00:16:57.639 "name": "BaseBdev4",
00:16:57.639 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c",
00:16:57.639 "is_configured": true,
00:16:57.639 "data_offset": 2048,
00:16:57.639 "data_size": 63488
00:16:57.639 }
00:16:57.639 ]
00:16:57.639 }'
00:16:57.639 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:57.639 16:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:58.206 "name": "raid_bdev1",
00:16:58.206 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8",
00:16:58.206 "strip_size_kb": 0,
00:16:58.206 "state": "online",
00:16:58.206 "raid_level": "raid1",
00:16:58.206 "superblock": true,
00:16:58.206 "num_base_bdevs": 4,
00:16:58.206 "num_base_bdevs_discovered": 3,
00:16:58.206 "num_base_bdevs_operational": 3,
00:16:58.206 "base_bdevs_list": [
00:16:58.206 {
00:16:58.206 "name": "spare",
00:16:58.206 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef",
00:16:58.206 "is_configured": true,
00:16:58.206 "data_offset": 2048,
00:16:58.206 "data_size": 63488
00:16:58.206 },
00:16:58.206 {
00:16:58.206 "name": null,
00:16:58.206 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:58.206 "is_configured": false,
00:16:58.206 "data_offset": 2048,
00:16:58.206 "data_size": 63488
00:16:58.206 },
00:16:58.206 {
00:16:58.206 "name": "BaseBdev3",
00:16:58.206 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84",
00:16:58.206 "is_configured": true,
00:16:58.206 "data_offset": 2048,
00:16:58.206 "data_size": 63488
00:16:58.206 },
00:16:58.206 {
00:16:58.206 "name": "BaseBdev4",
00:16:58.206 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c",
00:16:58.206 "is_configured": true,
00:16:58.206 "data_offset": 2048,
00:16:58.206 "data_size": 63488
00:16:58.206 }
00:16:58.206 ]
00:16:58.206 }'
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.206 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.207 [2024-10-08 16:24:51.476700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:58.207 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.465 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:58.465 "name": "raid_bdev1",
00:16:58.465 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8",
00:16:58.465 "strip_size_kb": 0,
00:16:58.465 "state": "online",
00:16:58.465 "raid_level": "raid1",
00:16:58.465 "superblock": true,
00:16:58.465 "num_base_bdevs": 4,
00:16:58.465 "num_base_bdevs_discovered": 2,
00:16:58.465 "num_base_bdevs_operational": 2,
00:16:58.465 "base_bdevs_list": [
00:16:58.465 {
00:16:58.465 "name": null,
00:16:58.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:58.466 "is_configured": false,
00:16:58.466 "data_offset": 0,
00:16:58.466 "data_size": 63488
00:16:58.466 },
00:16:58.466 {
00:16:58.466 "name": null,
00:16:58.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:58.466 "is_configured": false,
00:16:58.466 "data_offset": 2048,
00:16:58.466 "data_size": 63488
00:16:58.466 },
00:16:58.466 {
00:16:58.466 "name": "BaseBdev3",
00:16:58.466 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84",
00:16:58.466 "is_configured": true,
00:16:58.466 "data_offset": 2048,
00:16:58.466 "data_size": 63488
00:16:58.466 },
00:16:58.466 {
00:16:58.466 "name": "BaseBdev4",
00:16:58.466 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c",
00:16:58.466 "is_configured": true,
00:16:58.466 "data_offset": 2048,
00:16:58.466 "data_size": 63488
00:16:58.466 }
00:16:58.466 ]
00:16:58.466 }'
00:16:58.466 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:58.466 16:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.745 16:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:58.745 16:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.745 16:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.745 [2024-10-08 16:24:52.009107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:58.745 [2024-10-08 16:24:52.009606] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:16:58.745 [2024-10-08 16:24:52.009644] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:16:58.745 [2024-10-08 16:24:52.009704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:58.745 [2024-10-08 16:24:52.022746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:16:58.745 16:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.746 16:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:16:58.746 [2024-10-08 16:24:52.025499] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.722 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:59.980 "name": "raid_bdev1",
00:16:59.980 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8",
00:16:59.980 "strip_size_kb": 0,
00:16:59.980 "state": "online",
00:16:59.980 "raid_level": "raid1",
00:16:59.980 "superblock": true,
00:16:59.980 "num_base_bdevs": 4,
00:16:59.980 "num_base_bdevs_discovered": 3,
00:16:59.980 "num_base_bdevs_operational": 3,
00:16:59.980 "process": {
00:16:59.980 "type": "rebuild",
00:16:59.980 "target": "spare",
00:16:59.980 "progress": {
00:16:59.980 "blocks": 20480,
00:16:59.980 "percent": 32
00:16:59.980 }
00:16:59.980 },
00:16:59.980 "base_bdevs_list": [
00:16:59.980 {
00:16:59.980 "name": "spare",
00:16:59.980 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef",
00:16:59.980 "is_configured": true,
00:16:59.980 "data_offset": 2048,
00:16:59.980 "data_size": 63488
00:16:59.980 },
00:16:59.980 {
00:16:59.980 "name": null,
00:16:59.980 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:59.980 "is_configured": false,
00:16:59.980 "data_offset": 2048,
00:16:59.980 "data_size": 63488
00:16:59.980 },
00:16:59.980 {
00:16:59.980 "name": "BaseBdev3",
00:16:59.980 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84",
00:16:59.980 "is_configured": true,
00:16:59.980 "data_offset": 2048,
00:16:59.980 "data_size": 63488
00:16:59.980 },
00:16:59.980 {
00:16:59.980 "name": "BaseBdev4",
00:16:59.980 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c",
00:16:59.980 "is_configured": true,
00:16:59.980 "data_offset": 2048,
00:16:59.980 "data_size": 63488
00:16:59.980 }
00:16:59.980 ]
00:16:59.980 }'
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.980 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:59.980 [2024-10-08 16:24:53.195391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:59.980 [2024-10-08 16:24:53.235980] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:59.981 [2024-10-08 16:24:53.236127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:59.981 [2024-10-08 16:24:53.236154] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:59.981 [2024-10-08 16:24:53.236173] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:59.981 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.239 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:00.239 "name": "raid_bdev1",
00:17:00.239 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8",
00:17:00.239 "strip_size_kb": 0,
00:17:00.239 "state": "online",
00:17:00.239 "raid_level": "raid1",
00:17:00.239 "superblock": true,
00:17:00.239 "num_base_bdevs": 4,
00:17:00.239 "num_base_bdevs_discovered": 2,
00:17:00.239 "num_base_bdevs_operational": 2,
00:17:00.239 "base_bdevs_list": [
00:17:00.239 {
00:17:00.239 "name": null,
00:17:00.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:00.240 "is_configured": false,
00:17:00.240 "data_offset": 0,
00:17:00.240 "data_size": 63488
00:17:00.240 },
00:17:00.240 {
00:17:00.240 "name": null,
00:17:00.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:00.240 "is_configured": false,
00:17:00.240 "data_offset": 2048,
00:17:00.240 "data_size": 63488
00:17:00.240 },
00:17:00.240 {
00:17:00.240 "name": "BaseBdev3",
00:17:00.240 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84",
00:17:00.240 "is_configured": true,
00:17:00.240 "data_offset": 2048,
00:17:00.240 "data_size": 63488
00:17:00.240 },
00:17:00.240 {
00:17:00.240 "name": "BaseBdev4",
00:17:00.240 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c",
00:17:00.240 "is_configured": true,
00:17:00.240 "data_offset": 2048,
00:17:00.240 "data_size": 63488
00:17:00.240 }
00:17:00.240 ]
00:17:00.240 }'
00:17:00.240 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:00.240 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:17:00.498 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:00.498 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.498 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:17:00.498 [2024-10-08 16:24:53.799128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:00.498 [2024-10-08 16:24:53.799241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:00.498 [2024-10-08 16:24:53.799305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:17:00.498 [2024-10-08 16:24:53.799333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:00.498 [2024-10-08 16:24:53.800088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:00.498 [2024-10-08 16:24:53.800138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:00.498 [2024-10-08 16:24:53.800288] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-10-08
16:24:53.800318] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:00.498 [2024-10-08 16:24:53.800333] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:00.499 [2024-10-08 16:24:53.800380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.499 [2024-10-08 16:24:53.813616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:00.499 spare 00:17:00.499 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.499 16:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:00.499 [2024-10-08 16:24:53.816097] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.919 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.920 "name": "raid_bdev1", 00:17:01.920 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:01.920 "strip_size_kb": 0, 00:17:01.920 "state": "online", 00:17:01.920 "raid_level": "raid1", 00:17:01.920 "superblock": true, 00:17:01.920 "num_base_bdevs": 4, 00:17:01.920 "num_base_bdevs_discovered": 3, 00:17:01.920 "num_base_bdevs_operational": 3, 00:17:01.920 "process": { 00:17:01.920 "type": "rebuild", 00:17:01.920 "target": "spare", 00:17:01.920 "progress": { 00:17:01.920 "blocks": 20480, 00:17:01.920 "percent": 32 00:17:01.920 } 00:17:01.920 }, 00:17:01.920 "base_bdevs_list": [ 00:17:01.920 { 00:17:01.920 "name": "spare", 00:17:01.920 "uuid": "28e5bbab-894a-5e49-83d5-c8345c165fef", 00:17:01.920 "is_configured": true, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": null, 00:17:01.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.920 "is_configured": false, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": "BaseBdev3", 00:17:01.920 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:01.920 "is_configured": true, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": "BaseBdev4", 00:17:01.920 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:01.920 "is_configured": true, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 } 00:17:01.920 ] 00:17:01.920 }' 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.920 16:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.920 [2024-10-08 16:24:54.982106] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.920 [2024-10-08 16:24:55.025487] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.920 [2024-10-08 16:24:55.025618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.920 [2024-10-08 16:24:55.025651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.920 [2024-10-08 16:24:55.025663] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.920 "name": "raid_bdev1", 00:17:01.920 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:01.920 "strip_size_kb": 0, 00:17:01.920 "state": "online", 00:17:01.920 "raid_level": "raid1", 00:17:01.920 "superblock": true, 00:17:01.920 "num_base_bdevs": 4, 00:17:01.920 "num_base_bdevs_discovered": 2, 00:17:01.920 "num_base_bdevs_operational": 2, 00:17:01.920 "base_bdevs_list": [ 00:17:01.920 { 00:17:01.920 "name": null, 00:17:01.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.920 "is_configured": false, 00:17:01.920 "data_offset": 0, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": null, 00:17:01.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.920 "is_configured": false, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": "BaseBdev3", 00:17:01.920 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:01.920 "is_configured": true, 
00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 }, 00:17:01.920 { 00:17:01.920 "name": "BaseBdev4", 00:17:01.920 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:01.920 "is_configured": true, 00:17:01.920 "data_offset": 2048, 00:17:01.920 "data_size": 63488 00:17:01.920 } 00:17:01.920 ] 00:17:01.920 }' 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.920 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.487 "name": "raid_bdev1", 00:17:02.487 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:02.487 "strip_size_kb": 0, 00:17:02.487 "state": "online", 00:17:02.487 "raid_level": "raid1", 00:17:02.487 
"superblock": true, 00:17:02.487 "num_base_bdevs": 4, 00:17:02.487 "num_base_bdevs_discovered": 2, 00:17:02.487 "num_base_bdevs_operational": 2, 00:17:02.487 "base_bdevs_list": [ 00:17:02.487 { 00:17:02.487 "name": null, 00:17:02.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.487 "is_configured": false, 00:17:02.487 "data_offset": 0, 00:17:02.487 "data_size": 63488 00:17:02.487 }, 00:17:02.487 { 00:17:02.487 "name": null, 00:17:02.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.487 "is_configured": false, 00:17:02.487 "data_offset": 2048, 00:17:02.487 "data_size": 63488 00:17:02.487 }, 00:17:02.487 { 00:17:02.487 "name": "BaseBdev3", 00:17:02.487 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:02.487 "is_configured": true, 00:17:02.487 "data_offset": 2048, 00:17:02.487 "data_size": 63488 00:17:02.487 }, 00:17:02.487 { 00:17:02.487 "name": "BaseBdev4", 00:17:02.487 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:02.487 "is_configured": true, 00:17:02.487 "data_offset": 2048, 00:17:02.487 "data_size": 63488 00:17:02.487 } 00:17:02.487 ] 00:17:02.487 }' 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.487 [2024-10-08 16:24:55.684430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.487 [2024-10-08 16:24:55.684501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.487 [2024-10-08 16:24:55.684550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:02.487 [2024-10-08 16:24:55.684568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.487 [2024-10-08 16:24:55.685124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.487 [2024-10-08 16:24:55.685168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.487 [2024-10-08 16:24:55.685271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:02.487 [2024-10-08 16:24:55.685292] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:02.487 [2024-10-08 16:24:55.685307] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:02.487 [2024-10-08 16:24:55.685320] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:02.487 BaseBdev1 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.487 16:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.421 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.680 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.680 "name": "raid_bdev1", 00:17:03.680 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:03.680 "strip_size_kb": 0, 00:17:03.680 "state": "online", 00:17:03.680 "raid_level": "raid1", 00:17:03.680 "superblock": true, 00:17:03.680 
"num_base_bdevs": 4, 00:17:03.680 "num_base_bdevs_discovered": 2, 00:17:03.680 "num_base_bdevs_operational": 2, 00:17:03.680 "base_bdevs_list": [ 00:17:03.680 { 00:17:03.680 "name": null, 00:17:03.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.680 "is_configured": false, 00:17:03.680 "data_offset": 0, 00:17:03.680 "data_size": 63488 00:17:03.680 }, 00:17:03.680 { 00:17:03.680 "name": null, 00:17:03.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.680 "is_configured": false, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 }, 00:17:03.680 { 00:17:03.680 "name": "BaseBdev3", 00:17:03.680 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:03.680 "is_configured": true, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 }, 00:17:03.680 { 00:17:03.680 "name": "BaseBdev4", 00:17:03.680 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:03.680 "is_configured": true, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 } 00:17:03.680 ] 00:17:03.680 }' 00:17:03.680 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.680 16:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.939 16:24:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.198 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.198 "name": "raid_bdev1", 00:17:04.198 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:04.198 "strip_size_kb": 0, 00:17:04.198 "state": "online", 00:17:04.198 "raid_level": "raid1", 00:17:04.198 "superblock": true, 00:17:04.198 "num_base_bdevs": 4, 00:17:04.198 "num_base_bdevs_discovered": 2, 00:17:04.198 "num_base_bdevs_operational": 2, 00:17:04.198 "base_bdevs_list": [ 00:17:04.198 { 00:17:04.198 "name": null, 00:17:04.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.198 "is_configured": false, 00:17:04.198 "data_offset": 0, 00:17:04.198 "data_size": 63488 00:17:04.198 }, 00:17:04.198 { 00:17:04.198 "name": null, 00:17:04.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.198 "is_configured": false, 00:17:04.198 "data_offset": 2048, 00:17:04.198 "data_size": 63488 00:17:04.198 }, 00:17:04.198 { 00:17:04.198 "name": "BaseBdev3", 00:17:04.198 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:04.198 "is_configured": true, 00:17:04.198 "data_offset": 2048, 00:17:04.198 "data_size": 63488 00:17:04.198 }, 00:17:04.198 { 00:17:04.198 "name": "BaseBdev4", 00:17:04.198 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:04.198 "is_configured": true, 00:17:04.198 "data_offset": 2048, 00:17:04.198 "data_size": 63488 00:17:04.198 } 00:17:04.198 ] 00:17:04.198 }' 00:17:04.198 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.198 16:24:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.198 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.199 [2024-10-08 16:24:57.409327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.199 [2024-10-08 16:24:57.409740] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:04.199 [2024-10-08 16:24:57.409782] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:04.199 request: 00:17:04.199 { 00:17:04.199 "base_bdev": "BaseBdev1", 00:17:04.199 "raid_bdev": "raid_bdev1", 00:17:04.199 "method": "bdev_raid_add_base_bdev", 00:17:04.199 "req_id": 1 00:17:04.199 } 00:17:04.199 Got JSON-RPC error response 00:17:04.199 response: 00:17:04.199 { 00:17:04.199 "code": -22, 00:17:04.199 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:04.199 } 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.199 16:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.186 16:24:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.186 "name": "raid_bdev1", 00:17:05.186 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:05.186 "strip_size_kb": 0, 00:17:05.186 "state": "online", 00:17:05.186 "raid_level": "raid1", 00:17:05.186 "superblock": true, 00:17:05.186 "num_base_bdevs": 4, 00:17:05.186 "num_base_bdevs_discovered": 2, 00:17:05.186 "num_base_bdevs_operational": 2, 00:17:05.186 "base_bdevs_list": [ 00:17:05.186 { 00:17:05.186 "name": null, 00:17:05.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.186 "is_configured": false, 00:17:05.186 "data_offset": 0, 00:17:05.186 "data_size": 63488 00:17:05.186 }, 00:17:05.186 { 00:17:05.186 "name": null, 00:17:05.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.186 "is_configured": false, 00:17:05.186 "data_offset": 2048, 00:17:05.186 "data_size": 63488 00:17:05.186 }, 00:17:05.186 { 00:17:05.186 "name": "BaseBdev3", 00:17:05.186 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:05.186 "is_configured": true, 00:17:05.186 "data_offset": 2048, 00:17:05.186 "data_size": 63488 00:17:05.186 }, 00:17:05.186 { 00:17:05.186 "name": "BaseBdev4", 00:17:05.186 "uuid": 
"e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:05.186 "is_configured": true, 00:17:05.186 "data_offset": 2048, 00:17:05.186 "data_size": 63488 00:17:05.186 } 00:17:05.186 ] 00:17:05.186 }' 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.186 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.752 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.752 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.753 "name": "raid_bdev1", 00:17:05.753 "uuid": "dcb1fdac-d1dd-4bb0-9a0a-766fd04771e8", 00:17:05.753 "strip_size_kb": 0, 00:17:05.753 "state": "online", 00:17:05.753 "raid_level": "raid1", 00:17:05.753 "superblock": true, 00:17:05.753 "num_base_bdevs": 4, 00:17:05.753 "num_base_bdevs_discovered": 2, 00:17:05.753 "num_base_bdevs_operational": 2, 00:17:05.753 
"base_bdevs_list": [ 00:17:05.753 { 00:17:05.753 "name": null, 00:17:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.753 "is_configured": false, 00:17:05.753 "data_offset": 0, 00:17:05.753 "data_size": 63488 00:17:05.753 }, 00:17:05.753 { 00:17:05.753 "name": null, 00:17:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.753 "is_configured": false, 00:17:05.753 "data_offset": 2048, 00:17:05.753 "data_size": 63488 00:17:05.753 }, 00:17:05.753 { 00:17:05.753 "name": "BaseBdev3", 00:17:05.753 "uuid": "d5059b2a-dc07-538a-98c9-c4866ce53c84", 00:17:05.753 "is_configured": true, 00:17:05.753 "data_offset": 2048, 00:17:05.753 "data_size": 63488 00:17:05.753 }, 00:17:05.753 { 00:17:05.753 "name": "BaseBdev4", 00:17:05.753 "uuid": "e28270e3-554f-50a4-ab44-8ee0ca86d81c", 00:17:05.753 "is_configured": true, 00:17:05.753 "data_offset": 2048, 00:17:05.753 "data_size": 63488 00:17:05.753 } 00:17:05.753 ] 00:17:05.753 }' 00:17:05.753 16:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.753 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.753 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.010 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.010 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79838 00:17:06.010 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79838 ']' 00:17:06.010 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79838 00:17:06.010 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79838 00:17:06.011 killing process with pid 79838 00:17:06.011 Received shutdown signal, test time was about 19.493604 seconds 00:17:06.011 00:17:06.011 Latency(us) 00:17:06.011 [2024-10-08T16:24:59.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.011 [2024-10-08T16:24:59.333Z] =================================================================================================================== 00:17:06.011 [2024-10-08T16:24:59.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79838' 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79838 00:17:06.011 [2024-10-08 16:24:59.131950] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.011 16:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79838 00:17:06.011 [2024-10-08 16:24:59.132128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.011 [2024-10-08 16:24:59.132238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.011 [2024-10-08 16:24:59.132260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:06.269 [2024-10-08 16:24:59.520477] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:07.643 16:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:07.643 ************************************ 00:17:07.643 END TEST raid_rebuild_test_sb_io 00:17:07.643 
************************************ 00:17:07.643 00:17:07.643 real 0m23.357s 00:17:07.643 user 0m31.544s 00:17:07.643 sys 0m2.505s 00:17:07.643 16:25:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.643 16:25:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.643 16:25:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:07.643 16:25:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:07.643 16:25:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:07.643 16:25:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.643 16:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.644 ************************************ 00:17:07.644 START TEST raid5f_state_function_test 00:17:07.644 ************************************ 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:07.644 16:25:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:17:07.644 Process raid pid: 80590 00:17:07.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80590 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80590' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80590 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80590 ']' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.644 16:25:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.905 [2024-10-08 16:25:00.986714] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:17:07.905 [2024-10-08 16:25:00.986913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.905 [2024-10-08 16:25:01.164341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.162 [2024-10-08 16:25:01.458538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.420 [2024-10-08 16:25:01.666965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.420 [2024-10-08 16:25:01.667026] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.679 16:25:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:08.679 16:25:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:08.679 16:25:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:08.679 16:25:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 16:25:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.937 [2024-10-08 16:25:02.001761] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.937 [2024-10-08 16:25:02.001847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.937 [2024-10-08 16:25:02.001865] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.937 [2024-10-08 16:25:02.001882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.937 [2024-10-08 16:25:02.001892] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:08.937 [2024-10-08 16:25:02.001906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.937 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.937 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.937 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.937 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.937 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.938 "name": "Existed_Raid", 00:17:08.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.938 "strip_size_kb": 64, 00:17:08.938 "state": "configuring", 00:17:08.938 "raid_level": "raid5f", 00:17:08.938 "superblock": false, 00:17:08.938 "num_base_bdevs": 3, 00:17:08.938 "num_base_bdevs_discovered": 0, 00:17:08.938 "num_base_bdevs_operational": 3, 00:17:08.938 "base_bdevs_list": [ 00:17:08.938 { 00:17:08.938 "name": "BaseBdev1", 00:17:08.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.938 "is_configured": false, 00:17:08.938 "data_offset": 0, 00:17:08.938 "data_size": 0 00:17:08.938 }, 00:17:08.938 { 00:17:08.938 "name": "BaseBdev2", 00:17:08.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.938 "is_configured": false, 00:17:08.938 "data_offset": 0, 00:17:08.938 "data_size": 0 00:17:08.938 }, 00:17:08.938 { 00:17:08.938 "name": "BaseBdev3", 00:17:08.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.938 "is_configured": false, 00:17:08.938 "data_offset": 0, 00:17:08.938 "data_size": 0 00:17:08.938 } 00:17:08.938 ] 00:17:08.938 }' 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.938 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 [2024-10-08 16:25:02.537764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.505 [2024-10-08 16:25:02.537957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 [2024-10-08 16:25:02.545765] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.505 [2024-10-08 16:25:02.545948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.505 [2024-10-08 16:25:02.546075] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.505 [2024-10-08 16:25:02.546137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.505 [2024-10-08 16:25:02.546255] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.505 [2024-10-08 16:25:02.546314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 [2024-10-08 16:25:02.607816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.505 BaseBdev1 00:17:09.505 16:25:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 [ 00:17:09.505 { 00:17:09.505 "name": "BaseBdev1", 00:17:09.505 "aliases": [ 00:17:09.505 "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e" 00:17:09.505 ], 00:17:09.505 "product_name": "Malloc disk", 00:17:09.505 "block_size": 512, 00:17:09.505 "num_blocks": 65536, 00:17:09.505 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:09.505 "assigned_rate_limits": { 00:17:09.505 "rw_ios_per_sec": 0, 00:17:09.505 
"rw_mbytes_per_sec": 0, 00:17:09.505 "r_mbytes_per_sec": 0, 00:17:09.505 "w_mbytes_per_sec": 0 00:17:09.505 }, 00:17:09.505 "claimed": true, 00:17:09.505 "claim_type": "exclusive_write", 00:17:09.505 "zoned": false, 00:17:09.505 "supported_io_types": { 00:17:09.505 "read": true, 00:17:09.505 "write": true, 00:17:09.505 "unmap": true, 00:17:09.505 "flush": true, 00:17:09.505 "reset": true, 00:17:09.505 "nvme_admin": false, 00:17:09.505 "nvme_io": false, 00:17:09.505 "nvme_io_md": false, 00:17:09.505 "write_zeroes": true, 00:17:09.505 "zcopy": true, 00:17:09.505 "get_zone_info": false, 00:17:09.505 "zone_management": false, 00:17:09.505 "zone_append": false, 00:17:09.505 "compare": false, 00:17:09.505 "compare_and_write": false, 00:17:09.505 "abort": true, 00:17:09.505 "seek_hole": false, 00:17:09.505 "seek_data": false, 00:17:09.505 "copy": true, 00:17:09.505 "nvme_iov_md": false 00:17:09.505 }, 00:17:09.505 "memory_domains": [ 00:17:09.505 { 00:17:09.505 "dma_device_id": "system", 00:17:09.505 "dma_device_type": 1 00:17:09.505 }, 00:17:09.505 { 00:17:09.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.505 "dma_device_type": 2 00:17:09.505 } 00:17:09.505 ], 00:17:09.505 "driver_specific": {} 00:17:09.505 } 00:17:09.505 ] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.505 16:25:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.505 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.505 "name": "Existed_Raid", 00:17:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.505 "strip_size_kb": 64, 00:17:09.505 "state": "configuring", 00:17:09.505 "raid_level": "raid5f", 00:17:09.505 "superblock": false, 00:17:09.505 "num_base_bdevs": 3, 00:17:09.505 "num_base_bdevs_discovered": 1, 00:17:09.505 "num_base_bdevs_operational": 3, 00:17:09.505 "base_bdevs_list": [ 00:17:09.505 { 00:17:09.505 "name": "BaseBdev1", 00:17:09.505 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:09.505 "is_configured": true, 00:17:09.506 "data_offset": 0, 00:17:09.506 "data_size": 65536 00:17:09.506 }, 00:17:09.506 { 00:17:09.506 "name": 
"BaseBdev2", 00:17:09.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.506 "is_configured": false, 00:17:09.506 "data_offset": 0, 00:17:09.506 "data_size": 0 00:17:09.506 }, 00:17:09.506 { 00:17:09.506 "name": "BaseBdev3", 00:17:09.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.506 "is_configured": false, 00:17:09.506 "data_offset": 0, 00:17:09.506 "data_size": 0 00:17:09.506 } 00:17:09.506 ] 00:17:09.506 }' 00:17:09.506 16:25:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.506 16:25:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.072 [2024-10-08 16:25:03.184053] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.072 [2024-10-08 16:25:03.184119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.072 [2024-10-08 16:25:03.192052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.072 [2024-10-08 16:25:03.194864] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:10.072 [2024-10-08 16:25:03.194928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.072 [2024-10-08 16:25:03.194945] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:10.072 [2024-10-08 16:25:03.194969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.072 "name": "Existed_Raid", 00:17:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.072 "strip_size_kb": 64, 00:17:10.072 "state": "configuring", 00:17:10.072 "raid_level": "raid5f", 00:17:10.072 "superblock": false, 00:17:10.072 "num_base_bdevs": 3, 00:17:10.072 "num_base_bdevs_discovered": 1, 00:17:10.072 "num_base_bdevs_operational": 3, 00:17:10.072 "base_bdevs_list": [ 00:17:10.072 { 00:17:10.072 "name": "BaseBdev1", 00:17:10.072 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:10.072 "is_configured": true, 00:17:10.072 "data_offset": 0, 00:17:10.072 "data_size": 65536 00:17:10.072 }, 00:17:10.072 { 00:17:10.072 "name": "BaseBdev2", 00:17:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.072 "is_configured": false, 00:17:10.072 "data_offset": 0, 00:17:10.072 "data_size": 0 00:17:10.072 }, 00:17:10.072 { 00:17:10.072 "name": "BaseBdev3", 00:17:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.072 "is_configured": false, 00:17:10.072 "data_offset": 0, 00:17:10.072 "data_size": 0 00:17:10.072 } 00:17:10.072 ] 00:17:10.072 }' 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.072 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 [2024-10-08 16:25:03.751769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.639 BaseBdev2 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:10.639 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.640 [ 00:17:10.640 { 00:17:10.640 "name": "BaseBdev2", 00:17:10.640 "aliases": [ 00:17:10.640 "43023d90-c3bb-4f2a-a787-84798b633c22" 00:17:10.640 ], 00:17:10.640 "product_name": "Malloc disk", 00:17:10.640 "block_size": 512, 00:17:10.640 "num_blocks": 65536, 00:17:10.640 "uuid": "43023d90-c3bb-4f2a-a787-84798b633c22", 00:17:10.640 "assigned_rate_limits": { 00:17:10.640 "rw_ios_per_sec": 0, 00:17:10.640 "rw_mbytes_per_sec": 0, 00:17:10.640 "r_mbytes_per_sec": 0, 00:17:10.640 "w_mbytes_per_sec": 0 00:17:10.640 }, 00:17:10.640 "claimed": true, 00:17:10.640 "claim_type": "exclusive_write", 00:17:10.640 "zoned": false, 00:17:10.640 "supported_io_types": { 00:17:10.640 "read": true, 00:17:10.640 "write": true, 00:17:10.640 "unmap": true, 00:17:10.640 "flush": true, 00:17:10.640 "reset": true, 00:17:10.640 "nvme_admin": false, 00:17:10.640 "nvme_io": false, 00:17:10.640 "nvme_io_md": false, 00:17:10.640 "write_zeroes": true, 00:17:10.640 "zcopy": true, 00:17:10.640 "get_zone_info": false, 00:17:10.640 "zone_management": false, 00:17:10.640 "zone_append": false, 00:17:10.640 "compare": false, 00:17:10.640 "compare_and_write": false, 00:17:10.640 "abort": true, 00:17:10.640 "seek_hole": false, 00:17:10.640 "seek_data": false, 00:17:10.640 "copy": true, 00:17:10.640 "nvme_iov_md": false 00:17:10.640 }, 00:17:10.640 "memory_domains": [ 00:17:10.640 { 00:17:10.640 "dma_device_id": "system", 00:17:10.640 "dma_device_type": 1 00:17:10.640 }, 00:17:10.640 { 00:17:10.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.640 "dma_device_type": 2 00:17:10.640 } 00:17:10.640 ], 00:17:10.640 "driver_specific": {} 00:17:10.640 } 00:17:10.640 ] 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:10.640 "name": "Existed_Raid", 00:17:10.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.640 "strip_size_kb": 64, 00:17:10.640 "state": "configuring", 00:17:10.640 "raid_level": "raid5f", 00:17:10.640 "superblock": false, 00:17:10.640 "num_base_bdevs": 3, 00:17:10.640 "num_base_bdevs_discovered": 2, 00:17:10.640 "num_base_bdevs_operational": 3, 00:17:10.640 "base_bdevs_list": [ 00:17:10.640 { 00:17:10.640 "name": "BaseBdev1", 00:17:10.640 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:10.640 "is_configured": true, 00:17:10.640 "data_offset": 0, 00:17:10.640 "data_size": 65536 00:17:10.640 }, 00:17:10.640 { 00:17:10.640 "name": "BaseBdev2", 00:17:10.640 "uuid": "43023d90-c3bb-4f2a-a787-84798b633c22", 00:17:10.640 "is_configured": true, 00:17:10.640 "data_offset": 0, 00:17:10.640 "data_size": 65536 00:17:10.640 }, 00:17:10.640 { 00:17:10.640 "name": "BaseBdev3", 00:17:10.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.640 "is_configured": false, 00:17:10.640 "data_offset": 0, 00:17:10.640 "data_size": 0 00:17:10.640 } 00:17:10.640 ] 00:17:10.640 }' 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.640 16:25:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.207 [2024-10-08 16:25:04.375574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.207 [2024-10-08 16:25:04.375670] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:11.207 [2024-10-08 16:25:04.375695] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:11.207 [2024-10-08 16:25:04.376040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:11.207 [2024-10-08 16:25:04.381521] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:11.207 [2024-10-08 16:25:04.381602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:11.207 [2024-10-08 16:25:04.381949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.207 BaseBdev3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.207 [ 00:17:11.207 { 00:17:11.207 "name": "BaseBdev3", 00:17:11.207 "aliases": [ 00:17:11.207 "ff1f5961-5fd2-4b60-afd4-13a701a46879" 00:17:11.207 ], 00:17:11.207 "product_name": "Malloc disk", 00:17:11.207 "block_size": 512, 00:17:11.207 "num_blocks": 65536, 00:17:11.207 "uuid": "ff1f5961-5fd2-4b60-afd4-13a701a46879", 00:17:11.207 "assigned_rate_limits": { 00:17:11.207 "rw_ios_per_sec": 0, 00:17:11.207 "rw_mbytes_per_sec": 0, 00:17:11.207 "r_mbytes_per_sec": 0, 00:17:11.207 "w_mbytes_per_sec": 0 00:17:11.207 }, 00:17:11.207 "claimed": true, 00:17:11.207 "claim_type": "exclusive_write", 00:17:11.207 "zoned": false, 00:17:11.207 "supported_io_types": { 00:17:11.207 "read": true, 00:17:11.207 "write": true, 00:17:11.207 "unmap": true, 00:17:11.207 "flush": true, 00:17:11.207 "reset": true, 00:17:11.207 "nvme_admin": false, 00:17:11.207 "nvme_io": false, 00:17:11.207 "nvme_io_md": false, 00:17:11.207 "write_zeroes": true, 00:17:11.207 "zcopy": true, 00:17:11.207 "get_zone_info": false, 00:17:11.207 "zone_management": false, 00:17:11.207 "zone_append": false, 00:17:11.207 "compare": false, 00:17:11.207 "compare_and_write": false, 00:17:11.207 "abort": true, 00:17:11.207 "seek_hole": false, 00:17:11.207 "seek_data": false, 00:17:11.207 "copy": true, 00:17:11.207 "nvme_iov_md": false 00:17:11.207 }, 00:17:11.207 "memory_domains": [ 00:17:11.207 { 00:17:11.207 "dma_device_id": "system", 00:17:11.207 "dma_device_type": 1 00:17:11.207 }, 00:17:11.207 { 00:17:11.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.207 "dma_device_type": 2 00:17:11.207 } 00:17:11.207 ], 00:17:11.207 "driver_specific": {} 00:17:11.207 } 00:17:11.207 ] 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.207 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.208 16:25:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.208 "name": "Existed_Raid", 00:17:11.208 "uuid": "1b3eaebe-f4e6-4831-bb0c-723596c15982", 00:17:11.208 "strip_size_kb": 64, 00:17:11.208 "state": "online", 00:17:11.208 "raid_level": "raid5f", 00:17:11.208 "superblock": false, 00:17:11.208 "num_base_bdevs": 3, 00:17:11.208 "num_base_bdevs_discovered": 3, 00:17:11.208 "num_base_bdevs_operational": 3, 00:17:11.208 "base_bdevs_list": [ 00:17:11.208 { 00:17:11.208 "name": "BaseBdev1", 00:17:11.208 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:11.208 "is_configured": true, 00:17:11.208 "data_offset": 0, 00:17:11.208 "data_size": 65536 00:17:11.208 }, 00:17:11.208 { 00:17:11.208 "name": "BaseBdev2", 00:17:11.208 "uuid": "43023d90-c3bb-4f2a-a787-84798b633c22", 00:17:11.208 "is_configured": true, 00:17:11.208 "data_offset": 0, 00:17:11.208 "data_size": 65536 00:17:11.208 }, 00:17:11.208 { 00:17:11.208 "name": "BaseBdev3", 00:17:11.208 "uuid": "ff1f5961-5fd2-4b60-afd4-13a701a46879", 00:17:11.208 "is_configured": true, 00:17:11.208 "data_offset": 0, 00:17:11.208 "data_size": 65536 00:17:11.208 } 00:17:11.208 ] 00:17:11.208 }' 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.208 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:11.774 16:25:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:11.774 [2024-10-08 16:25:04.956157] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.774 16:25:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.774 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:11.774 "name": "Existed_Raid", 00:17:11.774 "aliases": [ 00:17:11.774 "1b3eaebe-f4e6-4831-bb0c-723596c15982" 00:17:11.774 ], 00:17:11.774 "product_name": "Raid Volume", 00:17:11.774 "block_size": 512, 00:17:11.774 "num_blocks": 131072, 00:17:11.774 "uuid": "1b3eaebe-f4e6-4831-bb0c-723596c15982", 00:17:11.774 "assigned_rate_limits": { 00:17:11.774 "rw_ios_per_sec": 0, 00:17:11.774 "rw_mbytes_per_sec": 0, 00:17:11.774 "r_mbytes_per_sec": 0, 00:17:11.774 "w_mbytes_per_sec": 0 00:17:11.774 }, 00:17:11.774 "claimed": false, 00:17:11.774 "zoned": false, 00:17:11.774 "supported_io_types": { 00:17:11.774 "read": true, 00:17:11.774 "write": true, 00:17:11.774 "unmap": false, 00:17:11.774 "flush": false, 00:17:11.774 "reset": true, 00:17:11.774 "nvme_admin": false, 00:17:11.774 "nvme_io": false, 00:17:11.774 "nvme_io_md": false, 00:17:11.774 "write_zeroes": true, 00:17:11.774 "zcopy": false, 00:17:11.774 "get_zone_info": false, 00:17:11.774 "zone_management": false, 00:17:11.774 "zone_append": false, 
00:17:11.774 "compare": false, 00:17:11.774 "compare_and_write": false, 00:17:11.774 "abort": false, 00:17:11.774 "seek_hole": false, 00:17:11.774 "seek_data": false, 00:17:11.774 "copy": false, 00:17:11.774 "nvme_iov_md": false 00:17:11.774 }, 00:17:11.774 "driver_specific": { 00:17:11.774 "raid": { 00:17:11.774 "uuid": "1b3eaebe-f4e6-4831-bb0c-723596c15982", 00:17:11.774 "strip_size_kb": 64, 00:17:11.774 "state": "online", 00:17:11.774 "raid_level": "raid5f", 00:17:11.774 "superblock": false, 00:17:11.774 "num_base_bdevs": 3, 00:17:11.774 "num_base_bdevs_discovered": 3, 00:17:11.774 "num_base_bdevs_operational": 3, 00:17:11.774 "base_bdevs_list": [ 00:17:11.774 { 00:17:11.774 "name": "BaseBdev1", 00:17:11.774 "uuid": "cfa8351d-1c7c-4170-aef6-a0b0812d2b2e", 00:17:11.774 "is_configured": true, 00:17:11.774 "data_offset": 0, 00:17:11.774 "data_size": 65536 00:17:11.774 }, 00:17:11.774 { 00:17:11.774 "name": "BaseBdev2", 00:17:11.774 "uuid": "43023d90-c3bb-4f2a-a787-84798b633c22", 00:17:11.774 "is_configured": true, 00:17:11.774 "data_offset": 0, 00:17:11.774 "data_size": 65536 00:17:11.774 }, 00:17:11.774 { 00:17:11.774 "name": "BaseBdev3", 00:17:11.774 "uuid": "ff1f5961-5fd2-4b60-afd4-13a701a46879", 00:17:11.774 "is_configured": true, 00:17:11.774 "data_offset": 0, 00:17:11.774 "data_size": 65536 00:17:11.774 } 00:17:11.774 ] 00:17:11.774 } 00:17:11.774 } 00:17:11.774 }' 00:17:11.774 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.774 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:11.774 BaseBdev2 00:17:11.774 BaseBdev3' 00:17:11.774 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.033 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.033 [2024-10-08 16:25:05.291976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:12.292 
16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.292 "name": "Existed_Raid", 00:17:12.292 "uuid": "1b3eaebe-f4e6-4831-bb0c-723596c15982", 00:17:12.292 "strip_size_kb": 64, 00:17:12.292 "state": 
"online", 00:17:12.292 "raid_level": "raid5f", 00:17:12.292 "superblock": false, 00:17:12.292 "num_base_bdevs": 3, 00:17:12.292 "num_base_bdevs_discovered": 2, 00:17:12.292 "num_base_bdevs_operational": 2, 00:17:12.292 "base_bdevs_list": [ 00:17:12.292 { 00:17:12.292 "name": null, 00:17:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.292 "is_configured": false, 00:17:12.292 "data_offset": 0, 00:17:12.292 "data_size": 65536 00:17:12.292 }, 00:17:12.292 { 00:17:12.292 "name": "BaseBdev2", 00:17:12.292 "uuid": "43023d90-c3bb-4f2a-a787-84798b633c22", 00:17:12.292 "is_configured": true, 00:17:12.292 "data_offset": 0, 00:17:12.292 "data_size": 65536 00:17:12.292 }, 00:17:12.292 { 00:17:12.292 "name": "BaseBdev3", 00:17:12.292 "uuid": "ff1f5961-5fd2-4b60-afd4-13a701a46879", 00:17:12.292 "is_configured": true, 00:17:12.292 "data_offset": 0, 00:17:12.292 "data_size": 65536 00:17:12.292 } 00:17:12.292 ] 00:17:12.292 }' 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.292 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.860 16:25:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 [2024-10-08 16:25:05.929679] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:12.860 [2024-10-08 16:25:05.929807] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.860 [2024-10-08 16:25:06.010967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 [2024-10-08 16:25:06.070991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.860 [2024-10-08 16:25:06.071081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.119 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:13.119 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:13.119 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:13.119 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:13.119 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 BaseBdev2 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:13.120 [ 00:17:13.120 { 00:17:13.120 "name": "BaseBdev2", 00:17:13.120 "aliases": [ 00:17:13.120 "640a9d97-a7f1-40da-bad2-86b8c63a6c3e" 00:17:13.120 ], 00:17:13.120 "product_name": "Malloc disk", 00:17:13.120 "block_size": 512, 00:17:13.120 "num_blocks": 65536, 00:17:13.120 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:13.120 "assigned_rate_limits": { 00:17:13.120 "rw_ios_per_sec": 0, 00:17:13.120 "rw_mbytes_per_sec": 0, 00:17:13.120 "r_mbytes_per_sec": 0, 00:17:13.120 "w_mbytes_per_sec": 0 00:17:13.120 }, 00:17:13.120 "claimed": false, 00:17:13.120 "zoned": false, 00:17:13.120 "supported_io_types": { 00:17:13.120 "read": true, 00:17:13.120 "write": true, 00:17:13.120 "unmap": true, 00:17:13.120 "flush": true, 00:17:13.120 "reset": true, 00:17:13.120 "nvme_admin": false, 00:17:13.120 "nvme_io": false, 00:17:13.120 "nvme_io_md": false, 00:17:13.120 "write_zeroes": true, 00:17:13.120 "zcopy": true, 00:17:13.120 "get_zone_info": false, 00:17:13.120 "zone_management": false, 00:17:13.120 "zone_append": false, 00:17:13.120 "compare": false, 00:17:13.120 "compare_and_write": false, 00:17:13.120 "abort": true, 00:17:13.120 "seek_hole": false, 00:17:13.120 "seek_data": false, 00:17:13.120 "copy": true, 00:17:13.120 "nvme_iov_md": false 00:17:13.120 }, 00:17:13.120 "memory_domains": [ 00:17:13.120 { 00:17:13.120 "dma_device_id": "system", 00:17:13.120 "dma_device_type": 1 00:17:13.120 }, 00:17:13.120 { 00:17:13.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.120 "dma_device_type": 2 00:17:13.120 } 00:17:13.120 ], 00:17:13.120 "driver_specific": {} 00:17:13.120 } 00:17:13.120 ] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 BaseBdev3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.120 [ 00:17:13.120 { 00:17:13.120 "name": "BaseBdev3", 00:17:13.120 "aliases": [ 00:17:13.120 "183c5448-7c4d-4ebb-945b-96d57767873a" 00:17:13.120 ], 00:17:13.120 "product_name": "Malloc disk", 00:17:13.120 "block_size": 512, 00:17:13.120 "num_blocks": 65536, 00:17:13.120 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:13.120 "assigned_rate_limits": { 00:17:13.120 "rw_ios_per_sec": 0, 00:17:13.120 "rw_mbytes_per_sec": 0, 00:17:13.120 "r_mbytes_per_sec": 0, 00:17:13.120 "w_mbytes_per_sec": 0 00:17:13.120 }, 00:17:13.120 "claimed": false, 00:17:13.120 "zoned": false, 00:17:13.120 "supported_io_types": { 00:17:13.120 "read": true, 00:17:13.120 "write": true, 00:17:13.120 "unmap": true, 00:17:13.120 "flush": true, 00:17:13.120 "reset": true, 00:17:13.120 "nvme_admin": false, 00:17:13.120 "nvme_io": false, 00:17:13.120 "nvme_io_md": false, 00:17:13.120 "write_zeroes": true, 00:17:13.120 "zcopy": true, 00:17:13.120 "get_zone_info": false, 00:17:13.120 "zone_management": false, 00:17:13.120 "zone_append": false, 00:17:13.120 "compare": false, 00:17:13.120 "compare_and_write": false, 00:17:13.120 "abort": true, 00:17:13.120 "seek_hole": false, 00:17:13.120 "seek_data": false, 00:17:13.120 "copy": true, 00:17:13.120 "nvme_iov_md": false 00:17:13.120 }, 00:17:13.120 "memory_domains": [ 00:17:13.120 { 00:17:13.120 "dma_device_id": "system", 00:17:13.120 "dma_device_type": 1 00:17:13.120 }, 00:17:13.120 { 00:17:13.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.120 "dma_device_type": 2 00:17:13.120 } 00:17:13.120 ], 00:17:13.120 "driver_specific": {} 00:17:13.120 } 00:17:13.120 ] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:13.120 16:25:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 [2024-10-08 16:25:06.369299] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.120 [2024-10-08 16:25:06.369512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.120 [2024-10-08 16:25:06.369695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.120 [2024-10-08 16:25:06.372279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.120 16:25:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.120 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.120 "name": "Existed_Raid", 00:17:13.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.120 "strip_size_kb": 64, 00:17:13.120 "state": "configuring", 00:17:13.120 "raid_level": "raid5f", 00:17:13.120 "superblock": false, 00:17:13.120 "num_base_bdevs": 3, 00:17:13.120 "num_base_bdevs_discovered": 2, 00:17:13.120 "num_base_bdevs_operational": 3, 00:17:13.120 "base_bdevs_list": [ 00:17:13.120 { 00:17:13.121 "name": "BaseBdev1", 00:17:13.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.121 "is_configured": false, 00:17:13.121 "data_offset": 0, 00:17:13.121 "data_size": 0 00:17:13.121 }, 00:17:13.121 { 00:17:13.121 "name": "BaseBdev2", 00:17:13.121 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:13.121 "is_configured": true, 00:17:13.121 "data_offset": 0, 00:17:13.121 "data_size": 65536 00:17:13.121 }, 00:17:13.121 { 00:17:13.121 "name": "BaseBdev3", 00:17:13.121 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:13.121 "is_configured": true, 
00:17:13.121 "data_offset": 0, 00:17:13.121 "data_size": 65536 00:17:13.121 } 00:17:13.121 ] 00:17:13.121 }' 00:17:13.121 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.121 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.686 [2024-10-08 16:25:06.917492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.686 16:25:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.686 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.687 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.687 16:25:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.687 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.687 "name": "Existed_Raid", 00:17:13.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.687 "strip_size_kb": 64, 00:17:13.687 "state": "configuring", 00:17:13.687 "raid_level": "raid5f", 00:17:13.687 "superblock": false, 00:17:13.687 "num_base_bdevs": 3, 00:17:13.687 "num_base_bdevs_discovered": 1, 00:17:13.687 "num_base_bdevs_operational": 3, 00:17:13.687 "base_bdevs_list": [ 00:17:13.687 { 00:17:13.687 "name": "BaseBdev1", 00:17:13.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.687 "is_configured": false, 00:17:13.687 "data_offset": 0, 00:17:13.687 "data_size": 0 00:17:13.687 }, 00:17:13.687 { 00:17:13.687 "name": null, 00:17:13.687 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:13.687 "is_configured": false, 00:17:13.687 "data_offset": 0, 00:17:13.687 "data_size": 65536 00:17:13.687 }, 00:17:13.687 { 00:17:13.687 "name": "BaseBdev3", 00:17:13.687 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:13.687 "is_configured": true, 00:17:13.687 "data_offset": 0, 00:17:13.687 "data_size": 65536 00:17:13.687 } 00:17:13.687 ] 00:17:13.687 }' 00:17:13.687 16:25:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.687 16:25:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 [2024-10-08 16:25:07.548208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.252 BaseBdev1 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:14.252 16:25:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.252 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.253 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.253 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.253 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.253 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.253 [ 00:17:14.253 { 00:17:14.253 "name": "BaseBdev1", 00:17:14.253 "aliases": [ 00:17:14.253 "15cf704e-f590-4f5b-bcf2-04a20f9e2542" 00:17:14.253 ], 00:17:14.253 "product_name": "Malloc disk", 00:17:14.253 "block_size": 512, 00:17:14.253 "num_blocks": 65536, 00:17:14.253 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:14.253 "assigned_rate_limits": { 00:17:14.253 "rw_ios_per_sec": 0, 00:17:14.253 "rw_mbytes_per_sec": 0, 00:17:14.253 "r_mbytes_per_sec": 0, 00:17:14.253 "w_mbytes_per_sec": 0 00:17:14.253 }, 00:17:14.253 "claimed": true, 00:17:14.253 "claim_type": "exclusive_write", 00:17:14.253 "zoned": false, 00:17:14.253 "supported_io_types": { 00:17:14.253 "read": true, 00:17:14.510 "write": true, 00:17:14.510 "unmap": true, 00:17:14.510 "flush": true, 00:17:14.510 "reset": true, 00:17:14.510 "nvme_admin": false, 00:17:14.510 "nvme_io": false, 00:17:14.510 "nvme_io_md": false, 00:17:14.510 "write_zeroes": true, 00:17:14.510 "zcopy": true, 00:17:14.510 "get_zone_info": false, 00:17:14.510 "zone_management": false, 00:17:14.510 "zone_append": false, 00:17:14.510 
"compare": false, 00:17:14.510 "compare_and_write": false, 00:17:14.510 "abort": true, 00:17:14.510 "seek_hole": false, 00:17:14.510 "seek_data": false, 00:17:14.510 "copy": true, 00:17:14.510 "nvme_iov_md": false 00:17:14.510 }, 00:17:14.510 "memory_domains": [ 00:17:14.510 { 00:17:14.510 "dma_device_id": "system", 00:17:14.510 "dma_device_type": 1 00:17:14.510 }, 00:17:14.510 { 00:17:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.510 "dma_device_type": 2 00:17:14.510 } 00:17:14.510 ], 00:17:14.510 "driver_specific": {} 00:17:14.510 } 00:17:14.510 ] 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.510 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.511 16:25:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.511 "name": "Existed_Raid", 00:17:14.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.511 "strip_size_kb": 64, 00:17:14.511 "state": "configuring", 00:17:14.511 "raid_level": "raid5f", 00:17:14.511 "superblock": false, 00:17:14.511 "num_base_bdevs": 3, 00:17:14.511 "num_base_bdevs_discovered": 2, 00:17:14.511 "num_base_bdevs_operational": 3, 00:17:14.511 "base_bdevs_list": [ 00:17:14.511 { 00:17:14.511 "name": "BaseBdev1", 00:17:14.511 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:14.511 "is_configured": true, 00:17:14.511 "data_offset": 0, 00:17:14.511 "data_size": 65536 00:17:14.511 }, 00:17:14.511 { 00:17:14.511 "name": null, 00:17:14.511 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:14.511 "is_configured": false, 00:17:14.511 "data_offset": 0, 00:17:14.511 "data_size": 65536 00:17:14.511 }, 00:17:14.511 { 00:17:14.511 "name": "BaseBdev3", 00:17:14.511 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:14.511 "is_configured": true, 00:17:14.511 "data_offset": 0, 00:17:14.511 "data_size": 65536 00:17:14.511 } 00:17:14.511 ] 00:17:14.511 }' 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.511 16:25:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.075 16:25:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.075 [2024-10-08 16:25:08.152466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.075 16:25:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.075 "name": "Existed_Raid", 00:17:15.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.075 "strip_size_kb": 64, 00:17:15.075 "state": "configuring", 00:17:15.075 "raid_level": "raid5f", 00:17:15.075 "superblock": false, 00:17:15.075 "num_base_bdevs": 3, 00:17:15.075 "num_base_bdevs_discovered": 1, 00:17:15.075 "num_base_bdevs_operational": 3, 00:17:15.075 "base_bdevs_list": [ 00:17:15.075 { 00:17:15.075 "name": "BaseBdev1", 00:17:15.075 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:15.075 "is_configured": true, 00:17:15.075 "data_offset": 0, 00:17:15.075 "data_size": 65536 00:17:15.075 }, 00:17:15.075 { 00:17:15.075 "name": null, 00:17:15.075 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:15.075 "is_configured": false, 00:17:15.075 "data_offset": 0, 00:17:15.075 "data_size": 65536 00:17:15.075 }, 00:17:15.075 { 00:17:15.075 "name": null, 
00:17:15.075 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:15.075 "is_configured": false, 00:17:15.075 "data_offset": 0, 00:17:15.075 "data_size": 65536 00:17:15.075 } 00:17:15.075 ] 00:17:15.075 }' 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.075 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 [2024-10-08 16:25:08.728604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.640 16:25:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.640 "name": "Existed_Raid", 00:17:15.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.640 "strip_size_kb": 64, 00:17:15.640 "state": "configuring", 00:17:15.640 "raid_level": "raid5f", 00:17:15.640 "superblock": false, 00:17:15.640 "num_base_bdevs": 3, 00:17:15.640 "num_base_bdevs_discovered": 2, 00:17:15.640 "num_base_bdevs_operational": 3, 00:17:15.640 "base_bdevs_list": [ 00:17:15.640 { 
00:17:15.640 "name": "BaseBdev1", 00:17:15.640 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:15.640 "is_configured": true, 00:17:15.640 "data_offset": 0, 00:17:15.640 "data_size": 65536 00:17:15.640 }, 00:17:15.640 { 00:17:15.640 "name": null, 00:17:15.640 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:15.640 "is_configured": false, 00:17:15.640 "data_offset": 0, 00:17:15.640 "data_size": 65536 00:17:15.640 }, 00:17:15.640 { 00:17:15.640 "name": "BaseBdev3", 00:17:15.640 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:15.640 "is_configured": true, 00:17:15.640 "data_offset": 0, 00:17:15.640 "data_size": 65536 00:17:15.640 } 00:17:15.640 ] 00:17:15.640 }' 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.640 16:25:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 [2024-10-08 16:25:09.292808] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.208 "name": "Existed_Raid", 00:17:16.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.208 "strip_size_kb": 64, 00:17:16.208 "state": "configuring", 00:17:16.208 "raid_level": "raid5f", 00:17:16.208 "superblock": false, 00:17:16.208 "num_base_bdevs": 3, 00:17:16.208 "num_base_bdevs_discovered": 1, 00:17:16.208 "num_base_bdevs_operational": 3, 00:17:16.208 "base_bdevs_list": [ 00:17:16.208 { 00:17:16.208 "name": null, 00:17:16.208 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:16.208 "is_configured": false, 00:17:16.208 "data_offset": 0, 00:17:16.208 "data_size": 65536 00:17:16.208 }, 00:17:16.208 { 00:17:16.208 "name": null, 00:17:16.208 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:16.208 "is_configured": false, 00:17:16.208 "data_offset": 0, 00:17:16.208 "data_size": 65536 00:17:16.208 }, 00:17:16.208 { 00:17:16.208 "name": "BaseBdev3", 00:17:16.208 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:16.208 "is_configured": true, 00:17:16.208 "data_offset": 0, 00:17:16.208 "data_size": 65536 00:17:16.208 } 00:17:16.208 ] 00:17:16.208 }' 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.208 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.775 [2024-10-08 16:25:09.959705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.775 16:25:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.775 16:25:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.775 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.775 "name": "Existed_Raid", 00:17:16.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.775 "strip_size_kb": 64, 00:17:16.775 "state": "configuring", 00:17:16.775 "raid_level": "raid5f", 00:17:16.775 "superblock": false, 00:17:16.775 "num_base_bdevs": 3, 00:17:16.775 "num_base_bdevs_discovered": 2, 00:17:16.775 "num_base_bdevs_operational": 3, 00:17:16.775 "base_bdevs_list": [ 00:17:16.775 { 00:17:16.775 "name": null, 00:17:16.775 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:16.776 "is_configured": false, 00:17:16.776 "data_offset": 0, 00:17:16.776 "data_size": 65536 00:17:16.776 }, 00:17:16.776 { 00:17:16.776 "name": "BaseBdev2", 00:17:16.776 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:16.776 "is_configured": true, 00:17:16.776 "data_offset": 0, 00:17:16.776 "data_size": 65536 00:17:16.776 }, 00:17:16.776 { 00:17:16.776 "name": "BaseBdev3", 00:17:16.776 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:16.776 "is_configured": true, 00:17:16.776 "data_offset": 0, 00:17:16.776 "data_size": 65536 00:17:16.776 } 00:17:16.776 ] 00:17:16.776 }' 00:17:16.776 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.776 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.343 16:25:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15cf704e-f590-4f5b-bcf2-04a20f9e2542 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 [2024-10-08 16:25:10.634597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:17.343 [2024-10-08 16:25:10.634662] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:17.343 [2024-10-08 16:25:10.634679] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:17.343 [2024-10-08 16:25:10.635015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:17.343 [2024-10-08 16:25:10.640216] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:17.343 [2024-10-08 16:25:10.640242] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:17.343 [2024-10-08 16:25:10.640630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.343 NewBaseBdev 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:17.343 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.343 16:25:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.343 [ 00:17:17.343 { 00:17:17.343 "name": "NewBaseBdev", 00:17:17.343 "aliases": [ 00:17:17.343 "15cf704e-f590-4f5b-bcf2-04a20f9e2542" 00:17:17.343 ], 00:17:17.343 "product_name": "Malloc disk", 00:17:17.343 "block_size": 512, 00:17:17.343 "num_blocks": 65536, 00:17:17.343 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:17.343 "assigned_rate_limits": { 00:17:17.343 "rw_ios_per_sec": 0, 00:17:17.343 "rw_mbytes_per_sec": 0, 00:17:17.343 "r_mbytes_per_sec": 0, 00:17:17.343 "w_mbytes_per_sec": 0 00:17:17.343 }, 00:17:17.343 "claimed": true, 00:17:17.343 "claim_type": "exclusive_write", 00:17:17.343 "zoned": false, 00:17:17.343 "supported_io_types": { 00:17:17.343 "read": true, 00:17:17.343 "write": true, 00:17:17.343 "unmap": true, 00:17:17.602 "flush": true, 00:17:17.602 "reset": true, 00:17:17.602 "nvme_admin": false, 00:17:17.602 "nvme_io": false, 00:17:17.602 "nvme_io_md": false, 00:17:17.602 "write_zeroes": true, 00:17:17.602 "zcopy": true, 00:17:17.602 "get_zone_info": false, 00:17:17.602 "zone_management": false, 00:17:17.602 "zone_append": false, 00:17:17.602 "compare": false, 00:17:17.602 "compare_and_write": false, 00:17:17.602 "abort": true, 00:17:17.602 "seek_hole": false, 00:17:17.602 "seek_data": false, 00:17:17.602 "copy": true, 00:17:17.602 "nvme_iov_md": false 00:17:17.602 }, 00:17:17.602 "memory_domains": [ 00:17:17.602 { 00:17:17.602 "dma_device_id": "system", 00:17:17.602 "dma_device_type": 1 00:17:17.602 }, 00:17:17.602 { 00:17:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.602 "dma_device_type": 2 00:17:17.602 } 00:17:17.602 ], 00:17:17.602 "driver_specific": {} 00:17:17.602 } 00:17:17.602 ] 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:17.602 16:25:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.602 "name": "Existed_Raid", 00:17:17.602 "uuid": "3c09c0b3-f08a-4fdb-8b53-6e090c51e8a8", 00:17:17.602 "strip_size_kb": 64, 00:17:17.602 "state": "online", 
00:17:17.602 "raid_level": "raid5f", 00:17:17.602 "superblock": false, 00:17:17.602 "num_base_bdevs": 3, 00:17:17.602 "num_base_bdevs_discovered": 3, 00:17:17.602 "num_base_bdevs_operational": 3, 00:17:17.602 "base_bdevs_list": [ 00:17:17.602 { 00:17:17.602 "name": "NewBaseBdev", 00:17:17.602 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:17.602 "is_configured": true, 00:17:17.602 "data_offset": 0, 00:17:17.602 "data_size": 65536 00:17:17.602 }, 00:17:17.602 { 00:17:17.602 "name": "BaseBdev2", 00:17:17.602 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:17.602 "is_configured": true, 00:17:17.602 "data_offset": 0, 00:17:17.602 "data_size": 65536 00:17:17.602 }, 00:17:17.602 { 00:17:17.602 "name": "BaseBdev3", 00:17:17.602 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:17.602 "is_configured": true, 00:17:17.602 "data_offset": 0, 00:17:17.602 "data_size": 65536 00:17:17.602 } 00:17:17.602 ] 00:17:17.602 }' 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.602 16:25:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.169 [2024-10-08 16:25:11.210810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.169 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:18.169 "name": "Existed_Raid", 00:17:18.169 "aliases": [ 00:17:18.169 "3c09c0b3-f08a-4fdb-8b53-6e090c51e8a8" 00:17:18.169 ], 00:17:18.169 "product_name": "Raid Volume", 00:17:18.169 "block_size": 512, 00:17:18.169 "num_blocks": 131072, 00:17:18.169 "uuid": "3c09c0b3-f08a-4fdb-8b53-6e090c51e8a8", 00:17:18.169 "assigned_rate_limits": { 00:17:18.169 "rw_ios_per_sec": 0, 00:17:18.169 "rw_mbytes_per_sec": 0, 00:17:18.169 "r_mbytes_per_sec": 0, 00:17:18.169 "w_mbytes_per_sec": 0 00:17:18.169 }, 00:17:18.169 "claimed": false, 00:17:18.169 "zoned": false, 00:17:18.169 "supported_io_types": { 00:17:18.169 "read": true, 00:17:18.169 "write": true, 00:17:18.169 "unmap": false, 00:17:18.169 "flush": false, 00:17:18.169 "reset": true, 00:17:18.169 "nvme_admin": false, 00:17:18.169 "nvme_io": false, 00:17:18.169 "nvme_io_md": false, 00:17:18.169 "write_zeroes": true, 00:17:18.169 "zcopy": false, 00:17:18.169 "get_zone_info": false, 00:17:18.169 "zone_management": false, 00:17:18.169 "zone_append": false, 00:17:18.169 "compare": false, 00:17:18.169 "compare_and_write": false, 00:17:18.169 "abort": false, 00:17:18.169 "seek_hole": false, 00:17:18.169 "seek_data": false, 00:17:18.169 "copy": false, 00:17:18.169 "nvme_iov_md": false 00:17:18.169 }, 00:17:18.169 "driver_specific": { 00:17:18.169 "raid": { 00:17:18.169 "uuid": "3c09c0b3-f08a-4fdb-8b53-6e090c51e8a8", 
00:17:18.169 "strip_size_kb": 64, 00:17:18.169 "state": "online", 00:17:18.169 "raid_level": "raid5f", 00:17:18.169 "superblock": false, 00:17:18.169 "num_base_bdevs": 3, 00:17:18.169 "num_base_bdevs_discovered": 3, 00:17:18.169 "num_base_bdevs_operational": 3, 00:17:18.169 "base_bdevs_list": [ 00:17:18.169 { 00:17:18.169 "name": "NewBaseBdev", 00:17:18.169 "uuid": "15cf704e-f590-4f5b-bcf2-04a20f9e2542", 00:17:18.170 "is_configured": true, 00:17:18.170 "data_offset": 0, 00:17:18.170 "data_size": 65536 00:17:18.170 }, 00:17:18.170 { 00:17:18.170 "name": "BaseBdev2", 00:17:18.170 "uuid": "640a9d97-a7f1-40da-bad2-86b8c63a6c3e", 00:17:18.170 "is_configured": true, 00:17:18.170 "data_offset": 0, 00:17:18.170 "data_size": 65536 00:17:18.170 }, 00:17:18.170 { 00:17:18.170 "name": "BaseBdev3", 00:17:18.170 "uuid": "183c5448-7c4d-4ebb-945b-96d57767873a", 00:17:18.170 "is_configured": true, 00:17:18.170 "data_offset": 0, 00:17:18.170 "data_size": 65536 00:17:18.170 } 00:17:18.170 ] 00:17:18.170 } 00:17:18.170 } 00:17:18.170 }' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:18.170 BaseBdev2 00:17:18.170 BaseBdev3' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 
-- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.170 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.430 [2024-10-08 16:25:11.534559] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.430 [2024-10-08 16:25:11.534592] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.430 [2024-10-08 16:25:11.534686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.430 [2024-10-08 16:25:11.535052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.430 [2024-10-08 16:25:11.535087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80590 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80590 ']' 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80590 00:17:18.430 
16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80590 00:17:18.430 killing process with pid 80590 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80590' 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80590 00:17:18.430 [2024-10-08 16:25:11.585138] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.430 16:25:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80590 00:17:18.688 [2024-10-08 16:25:11.851758] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.063 16:25:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:20.063 00:17:20.063 real 0m12.172s 00:17:20.063 user 0m19.980s 00:17:20.063 sys 0m1.843s 00:17:20.063 ************************************ 00:17:20.063 END TEST raid5f_state_function_test 00:17:20.063 ************************************ 00:17:20.063 16:25:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.063 16:25:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.064 16:25:13 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:20.064 16:25:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:20.064 16:25:13 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.064 16:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.064 ************************************ 00:17:20.064 START TEST raid5f_state_function_test_sb 00:17:20.064 ************************************ 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:20.064 16:25:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81231 00:17:20.064 Process raid pid: 81231 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81231' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81231 00:17:20.064 16:25:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81231 ']' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.064 16:25:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.064 [2024-10-08 16:25:13.221190] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:17:20.064 [2024-10-08 16:25:13.221366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.322 [2024-10-08 16:25:13.399774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.582 [2024-10-08 16:25:13.649952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.582 [2024-10-08 16:25:13.858086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.582 [2024-10-08 16:25:13.858157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.148 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.148 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:21.148 16:25:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:21.148 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.148 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.148 [2024-10-08 16:25:14.192788] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.148 [2024-10-08 16:25:14.192854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.149 [2024-10-08 16:25:14.192872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.149 [2024-10-08 16:25:14.192891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.149 [2024-10-08 16:25:14.192902] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.149 [2024-10-08 16:25:14.192917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.149 "name": "Existed_Raid", 00:17:21.149 "uuid": "3a8e3ed3-3e64-491d-a538-7ee8d7f0ab90", 00:17:21.149 "strip_size_kb": 64, 00:17:21.149 "state": "configuring", 00:17:21.149 "raid_level": "raid5f", 00:17:21.149 "superblock": true, 00:17:21.149 "num_base_bdevs": 3, 00:17:21.149 "num_base_bdevs_discovered": 0, 00:17:21.149 "num_base_bdevs_operational": 3, 00:17:21.149 "base_bdevs_list": [ 00:17:21.149 { 00:17:21.149 "name": "BaseBdev1", 00:17:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.149 "is_configured": false, 00:17:21.149 "data_offset": 0, 00:17:21.149 "data_size": 0 00:17:21.149 }, 00:17:21.149 { 00:17:21.149 "name": "BaseBdev2", 00:17:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.149 "is_configured": false, 00:17:21.149 
"data_offset": 0, 00:17:21.149 "data_size": 0 00:17:21.149 }, 00:17:21.149 { 00:17:21.149 "name": "BaseBdev3", 00:17:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.149 "is_configured": false, 00:17:21.149 "data_offset": 0, 00:17:21.149 "data_size": 0 00:17:21.149 } 00:17:21.149 ] 00:17:21.149 }' 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.149 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.407 [2024-10-08 16:25:14.664819] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.407 [2024-10-08 16:25:14.664873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.407 [2024-10-08 16:25:14.672841] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.407 [2024-10-08 16:25:14.672961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.407 [2024-10-08 16:25:14.672976] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.407 [2024-10-08 16:25:14.672991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.407 [2024-10-08 16:25:14.673001] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.407 [2024-10-08 16:25:14.673016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.407 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.408 [2024-10-08 16:25:14.725632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.408 BaseBdev1 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.408 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.666 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.667 [ 00:17:21.667 { 00:17:21.667 "name": "BaseBdev1", 00:17:21.667 "aliases": [ 00:17:21.667 "8916adcb-f13a-4a0c-a490-5886ae949e5b" 00:17:21.667 ], 00:17:21.667 "product_name": "Malloc disk", 00:17:21.667 "block_size": 512, 00:17:21.667 "num_blocks": 65536, 00:17:21.667 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 00:17:21.667 "assigned_rate_limits": { 00:17:21.667 "rw_ios_per_sec": 0, 00:17:21.667 "rw_mbytes_per_sec": 0, 00:17:21.667 "r_mbytes_per_sec": 0, 00:17:21.667 "w_mbytes_per_sec": 0 00:17:21.667 }, 00:17:21.667 "claimed": true, 00:17:21.667 "claim_type": "exclusive_write", 00:17:21.667 "zoned": false, 00:17:21.667 "supported_io_types": { 00:17:21.667 "read": true, 00:17:21.667 "write": true, 00:17:21.667 "unmap": true, 00:17:21.667 "flush": true, 00:17:21.667 "reset": true, 00:17:21.667 "nvme_admin": false, 00:17:21.667 "nvme_io": false, 00:17:21.667 "nvme_io_md": false, 00:17:21.667 "write_zeroes": true, 00:17:21.667 "zcopy": true, 00:17:21.667 "get_zone_info": false, 00:17:21.667 "zone_management": false, 00:17:21.667 "zone_append": false, 00:17:21.667 "compare": false, 00:17:21.667 "compare_and_write": false, 00:17:21.667 "abort": true, 00:17:21.667 "seek_hole": false, 00:17:21.667 
"seek_data": false, 00:17:21.667 "copy": true, 00:17:21.667 "nvme_iov_md": false 00:17:21.667 }, 00:17:21.667 "memory_domains": [ 00:17:21.667 { 00:17:21.667 "dma_device_id": "system", 00:17:21.667 "dma_device_type": 1 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.667 "dma_device_type": 2 00:17:21.667 } 00:17:21.667 ], 00:17:21.667 "driver_specific": {} 00:17:21.667 } 00:17:21.667 ] 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.667 "name": "Existed_Raid", 00:17:21.667 "uuid": "0d33d3e2-87cc-4564-86ca-8bf3b0187022", 00:17:21.667 "strip_size_kb": 64, 00:17:21.667 "state": "configuring", 00:17:21.667 "raid_level": "raid5f", 00:17:21.667 "superblock": true, 00:17:21.667 "num_base_bdevs": 3, 00:17:21.667 "num_base_bdevs_discovered": 1, 00:17:21.667 "num_base_bdevs_operational": 3, 00:17:21.667 "base_bdevs_list": [ 00:17:21.667 { 00:17:21.667 "name": "BaseBdev1", 00:17:21.667 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 00:17:21.667 "is_configured": true, 00:17:21.667 "data_offset": 2048, 00:17:21.667 "data_size": 63488 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "name": "BaseBdev2", 00:17:21.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.667 "is_configured": false, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 0 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "name": "BaseBdev3", 00:17:21.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.667 "is_configured": false, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 0 00:17:21.667 } 00:17:21.667 ] 00:17:21.667 }' 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.667 16:25:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.233 [2024-10-08 16:25:15.281869] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.233 [2024-10-08 16:25:15.281937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.233 [2024-10-08 16:25:15.293930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.233 [2024-10-08 16:25:15.296446] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.233 [2024-10-08 16:25:15.296502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.233 [2024-10-08 16:25:15.296533] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.233 [2024-10-08 16:25:15.296553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.233 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.234 "name": 
"Existed_Raid", 00:17:22.234 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:22.234 "strip_size_kb": 64, 00:17:22.234 "state": "configuring", 00:17:22.234 "raid_level": "raid5f", 00:17:22.234 "superblock": true, 00:17:22.234 "num_base_bdevs": 3, 00:17:22.234 "num_base_bdevs_discovered": 1, 00:17:22.234 "num_base_bdevs_operational": 3, 00:17:22.234 "base_bdevs_list": [ 00:17:22.234 { 00:17:22.234 "name": "BaseBdev1", 00:17:22.234 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 00:17:22.234 "is_configured": true, 00:17:22.234 "data_offset": 2048, 00:17:22.234 "data_size": 63488 00:17:22.234 }, 00:17:22.234 { 00:17:22.234 "name": "BaseBdev2", 00:17:22.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.234 "is_configured": false, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 0 00:17:22.234 }, 00:17:22.234 { 00:17:22.234 "name": "BaseBdev3", 00:17:22.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.234 "is_configured": false, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 0 00:17:22.234 } 00:17:22.234 ] 00:17:22.234 }' 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.234 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.492 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:22.492 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.492 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.751 [2024-10-08 16:25:15.852235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.751 BaseBdev2 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.751 [ 00:17:22.751 { 00:17:22.751 "name": "BaseBdev2", 00:17:22.751 "aliases": [ 00:17:22.751 "f125ab5b-739c-4d6b-ab26-441793dfa443" 00:17:22.751 ], 00:17:22.751 "product_name": "Malloc disk", 00:17:22.751 "block_size": 512, 00:17:22.751 "num_blocks": 65536, 00:17:22.751 "uuid": "f125ab5b-739c-4d6b-ab26-441793dfa443", 00:17:22.751 "assigned_rate_limits": { 00:17:22.751 "rw_ios_per_sec": 0, 00:17:22.751 "rw_mbytes_per_sec": 0, 00:17:22.751 "r_mbytes_per_sec": 0, 00:17:22.751 "w_mbytes_per_sec": 0 00:17:22.751 }, 00:17:22.751 "claimed": true, 
00:17:22.751 "claim_type": "exclusive_write", 00:17:22.751 "zoned": false, 00:17:22.751 "supported_io_types": { 00:17:22.751 "read": true, 00:17:22.751 "write": true, 00:17:22.751 "unmap": true, 00:17:22.751 "flush": true, 00:17:22.751 "reset": true, 00:17:22.751 "nvme_admin": false, 00:17:22.751 "nvme_io": false, 00:17:22.751 "nvme_io_md": false, 00:17:22.751 "write_zeroes": true, 00:17:22.751 "zcopy": true, 00:17:22.751 "get_zone_info": false, 00:17:22.751 "zone_management": false, 00:17:22.751 "zone_append": false, 00:17:22.751 "compare": false, 00:17:22.751 "compare_and_write": false, 00:17:22.751 "abort": true, 00:17:22.751 "seek_hole": false, 00:17:22.751 "seek_data": false, 00:17:22.751 "copy": true, 00:17:22.751 "nvme_iov_md": false 00:17:22.751 }, 00:17:22.751 "memory_domains": [ 00:17:22.751 { 00:17:22.751 "dma_device_id": "system", 00:17:22.751 "dma_device_type": 1 00:17:22.751 }, 00:17:22.751 { 00:17:22.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.751 "dma_device_type": 2 00:17:22.751 } 00:17:22.751 ], 00:17:22.751 "driver_specific": {} 00:17:22.751 } 00:17:22.751 ] 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.751 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.751 16:25:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.752 "name": "Existed_Raid", 00:17:22.752 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:22.752 "strip_size_kb": 64, 00:17:22.752 "state": "configuring", 00:17:22.752 "raid_level": "raid5f", 00:17:22.752 "superblock": true, 00:17:22.752 "num_base_bdevs": 3, 00:17:22.752 "num_base_bdevs_discovered": 2, 00:17:22.752 "num_base_bdevs_operational": 3, 00:17:22.752 "base_bdevs_list": [ 00:17:22.752 { 00:17:22.752 "name": "BaseBdev1", 00:17:22.752 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 
00:17:22.752 "is_configured": true, 00:17:22.752 "data_offset": 2048, 00:17:22.752 "data_size": 63488 00:17:22.752 }, 00:17:22.752 { 00:17:22.752 "name": "BaseBdev2", 00:17:22.752 "uuid": "f125ab5b-739c-4d6b-ab26-441793dfa443", 00:17:22.752 "is_configured": true, 00:17:22.752 "data_offset": 2048, 00:17:22.752 "data_size": 63488 00:17:22.752 }, 00:17:22.752 { 00:17:22.752 "name": "BaseBdev3", 00:17:22.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.752 "is_configured": false, 00:17:22.752 "data_offset": 0, 00:17:22.752 "data_size": 0 00:17:22.752 } 00:17:22.752 ] 00:17:22.752 }' 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.752 16:25:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.319 [2024-10-08 16:25:16.448450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.319 [2024-10-08 16:25:16.448859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:23.319 [2024-10-08 16:25:16.448901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:23.319 [2024-10-08 16:25:16.449253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:23.319 BaseBdev3 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.319 [2024-10-08 16:25:16.454729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:23.319 [2024-10-08 16:25:16.454762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:23.319 [2024-10-08 16:25:16.455127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.319 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 [ 00:17:23.320 { 00:17:23.320 "name": "BaseBdev3", 00:17:23.320 "aliases": [ 00:17:23.320 "9bb4144c-2e64-436b-93c4-27b433df4267" 00:17:23.320 ], 00:17:23.320 "product_name": "Malloc disk", 00:17:23.320 "block_size": 512, 00:17:23.320 
"num_blocks": 65536, 00:17:23.320 "uuid": "9bb4144c-2e64-436b-93c4-27b433df4267", 00:17:23.320 "assigned_rate_limits": { 00:17:23.320 "rw_ios_per_sec": 0, 00:17:23.320 "rw_mbytes_per_sec": 0, 00:17:23.320 "r_mbytes_per_sec": 0, 00:17:23.320 "w_mbytes_per_sec": 0 00:17:23.320 }, 00:17:23.320 "claimed": true, 00:17:23.320 "claim_type": "exclusive_write", 00:17:23.320 "zoned": false, 00:17:23.320 "supported_io_types": { 00:17:23.320 "read": true, 00:17:23.320 "write": true, 00:17:23.320 "unmap": true, 00:17:23.320 "flush": true, 00:17:23.320 "reset": true, 00:17:23.320 "nvme_admin": false, 00:17:23.320 "nvme_io": false, 00:17:23.320 "nvme_io_md": false, 00:17:23.320 "write_zeroes": true, 00:17:23.320 "zcopy": true, 00:17:23.320 "get_zone_info": false, 00:17:23.320 "zone_management": false, 00:17:23.320 "zone_append": false, 00:17:23.320 "compare": false, 00:17:23.320 "compare_and_write": false, 00:17:23.320 "abort": true, 00:17:23.320 "seek_hole": false, 00:17:23.320 "seek_data": false, 00:17:23.320 "copy": true, 00:17:23.320 "nvme_iov_md": false 00:17:23.320 }, 00:17:23.320 "memory_domains": [ 00:17:23.320 { 00:17:23.320 "dma_device_id": "system", 00:17:23.320 "dma_device_type": 1 00:17:23.320 }, 00:17:23.320 { 00:17:23.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.320 "dma_device_type": 2 00:17:23.320 } 00:17:23.320 ], 00:17:23.320 "driver_specific": {} 00:17:23.320 } 00:17:23.320 ] 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.320 "name": "Existed_Raid", 00:17:23.320 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:23.320 "strip_size_kb": 64, 00:17:23.320 "state": "online", 00:17:23.320 "raid_level": "raid5f", 00:17:23.320 "superblock": true, 
00:17:23.320 "num_base_bdevs": 3, 00:17:23.320 "num_base_bdevs_discovered": 3, 00:17:23.320 "num_base_bdevs_operational": 3, 00:17:23.320 "base_bdevs_list": [ 00:17:23.320 { 00:17:23.320 "name": "BaseBdev1", 00:17:23.320 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 00:17:23.320 "is_configured": true, 00:17:23.320 "data_offset": 2048, 00:17:23.320 "data_size": 63488 00:17:23.320 }, 00:17:23.320 { 00:17:23.320 "name": "BaseBdev2", 00:17:23.320 "uuid": "f125ab5b-739c-4d6b-ab26-441793dfa443", 00:17:23.320 "is_configured": true, 00:17:23.320 "data_offset": 2048, 00:17:23.320 "data_size": 63488 00:17:23.320 }, 00:17:23.320 { 00:17:23.320 "name": "BaseBdev3", 00:17:23.320 "uuid": "9bb4144c-2e64-436b-93c4-27b433df4267", 00:17:23.320 "is_configured": true, 00:17:23.320 "data_offset": 2048, 00:17:23.320 "data_size": 63488 00:17:23.320 } 00:17:23.320 ] 00:17:23.320 }' 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.320 16:25:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.887 [2024-10-08 16:25:17.037338] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:23.887 "name": "Existed_Raid", 00:17:23.887 "aliases": [ 00:17:23.887 "c26c7065-92d6-4637-80bf-385fcd3e46c2" 00:17:23.887 ], 00:17:23.887 "product_name": "Raid Volume", 00:17:23.887 "block_size": 512, 00:17:23.887 "num_blocks": 126976, 00:17:23.887 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:23.887 "assigned_rate_limits": { 00:17:23.887 "rw_ios_per_sec": 0, 00:17:23.887 "rw_mbytes_per_sec": 0, 00:17:23.887 "r_mbytes_per_sec": 0, 00:17:23.887 "w_mbytes_per_sec": 0 00:17:23.887 }, 00:17:23.887 "claimed": false, 00:17:23.887 "zoned": false, 00:17:23.887 "supported_io_types": { 00:17:23.887 "read": true, 00:17:23.887 "write": true, 00:17:23.887 "unmap": false, 00:17:23.887 "flush": false, 00:17:23.887 "reset": true, 00:17:23.887 "nvme_admin": false, 00:17:23.887 "nvme_io": false, 00:17:23.887 "nvme_io_md": false, 00:17:23.887 "write_zeroes": true, 00:17:23.887 "zcopy": false, 00:17:23.887 "get_zone_info": false, 00:17:23.887 "zone_management": false, 00:17:23.887 "zone_append": false, 00:17:23.887 "compare": false, 00:17:23.887 "compare_and_write": false, 00:17:23.887 "abort": false, 00:17:23.887 "seek_hole": false, 00:17:23.887 "seek_data": false, 00:17:23.887 "copy": false, 00:17:23.887 "nvme_iov_md": false 00:17:23.887 }, 00:17:23.887 "driver_specific": { 00:17:23.887 "raid": { 00:17:23.887 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:23.887 
"strip_size_kb": 64, 00:17:23.887 "state": "online", 00:17:23.887 "raid_level": "raid5f", 00:17:23.887 "superblock": true, 00:17:23.887 "num_base_bdevs": 3, 00:17:23.887 "num_base_bdevs_discovered": 3, 00:17:23.887 "num_base_bdevs_operational": 3, 00:17:23.887 "base_bdevs_list": [ 00:17:23.887 { 00:17:23.887 "name": "BaseBdev1", 00:17:23.887 "uuid": "8916adcb-f13a-4a0c-a490-5886ae949e5b", 00:17:23.887 "is_configured": true, 00:17:23.887 "data_offset": 2048, 00:17:23.887 "data_size": 63488 00:17:23.887 }, 00:17:23.887 { 00:17:23.887 "name": "BaseBdev2", 00:17:23.887 "uuid": "f125ab5b-739c-4d6b-ab26-441793dfa443", 00:17:23.887 "is_configured": true, 00:17:23.887 "data_offset": 2048, 00:17:23.887 "data_size": 63488 00:17:23.887 }, 00:17:23.887 { 00:17:23.887 "name": "BaseBdev3", 00:17:23.887 "uuid": "9bb4144c-2e64-436b-93c4-27b433df4267", 00:17:23.887 "is_configured": true, 00:17:23.887 "data_offset": 2048, 00:17:23.887 "data_size": 63488 00:17:23.887 } 00:17:23.887 ] 00:17:23.887 } 00:17:23.887 } 00:17:23.887 }' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:23.887 BaseBdev2 00:17:23.887 BaseBdev3' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.887 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.146 
16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.146 [2024-10-08 16:25:17.361244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.146 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.405 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.405 "name": "Existed_Raid", 00:17:24.405 "uuid": "c26c7065-92d6-4637-80bf-385fcd3e46c2", 00:17:24.405 "strip_size_kb": 64, 00:17:24.405 "state": "online", 00:17:24.405 "raid_level": "raid5f", 00:17:24.405 "superblock": true, 00:17:24.405 "num_base_bdevs": 3, 00:17:24.405 "num_base_bdevs_discovered": 2, 00:17:24.405 "num_base_bdevs_operational": 2, 
00:17:24.405 "base_bdevs_list": [ 00:17:24.405 { 00:17:24.405 "name": null, 00:17:24.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.405 "is_configured": false, 00:17:24.405 "data_offset": 0, 00:17:24.405 "data_size": 63488 00:17:24.405 }, 00:17:24.405 { 00:17:24.405 "name": "BaseBdev2", 00:17:24.405 "uuid": "f125ab5b-739c-4d6b-ab26-441793dfa443", 00:17:24.405 "is_configured": true, 00:17:24.405 "data_offset": 2048, 00:17:24.405 "data_size": 63488 00:17:24.405 }, 00:17:24.405 { 00:17:24.405 "name": "BaseBdev3", 00:17:24.405 "uuid": "9bb4144c-2e64-436b-93c4-27b433df4267", 00:17:24.405 "is_configured": true, 00:17:24.405 "data_offset": 2048, 00:17:24.405 "data_size": 63488 00:17:24.405 } 00:17:24.405 ] 00:17:24.405 }' 00:17:24.405 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.405 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.673 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:24.673 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.674 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.674 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.674 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.674 16:25:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:24.674 16:25:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.957 [2024-10-08 16:25:18.029911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:24.957 [2024-10-08 16:25:18.030137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.957 [2024-10-08 16:25:18.110453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:24.957 
16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.957 [2024-10-08 16:25:18.170559] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:24.957 [2024-10-08 16:25:18.170634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:24.957 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 BaseBdev2 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 [ 00:17:25.216 { 
00:17:25.216 "name": "BaseBdev2", 00:17:25.216 "aliases": [ 00:17:25.216 "608fa060-6ceb-45c1-b204-ce9068aab7fa" 00:17:25.216 ], 00:17:25.216 "product_name": "Malloc disk", 00:17:25.216 "block_size": 512, 00:17:25.216 "num_blocks": 65536, 00:17:25.216 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:25.216 "assigned_rate_limits": { 00:17:25.216 "rw_ios_per_sec": 0, 00:17:25.216 "rw_mbytes_per_sec": 0, 00:17:25.216 "r_mbytes_per_sec": 0, 00:17:25.216 "w_mbytes_per_sec": 0 00:17:25.216 }, 00:17:25.216 "claimed": false, 00:17:25.216 "zoned": false, 00:17:25.216 "supported_io_types": { 00:17:25.216 "read": true, 00:17:25.216 "write": true, 00:17:25.216 "unmap": true, 00:17:25.216 "flush": true, 00:17:25.216 "reset": true, 00:17:25.216 "nvme_admin": false, 00:17:25.216 "nvme_io": false, 00:17:25.216 "nvme_io_md": false, 00:17:25.216 "write_zeroes": true, 00:17:25.216 "zcopy": true, 00:17:25.216 "get_zone_info": false, 00:17:25.216 "zone_management": false, 00:17:25.216 "zone_append": false, 00:17:25.216 "compare": false, 00:17:25.216 "compare_and_write": false, 00:17:25.216 "abort": true, 00:17:25.216 "seek_hole": false, 00:17:25.216 "seek_data": false, 00:17:25.216 "copy": true, 00:17:25.216 "nvme_iov_md": false 00:17:25.216 }, 00:17:25.216 "memory_domains": [ 00:17:25.216 { 00:17:25.216 "dma_device_id": "system", 00:17:25.216 "dma_device_type": 1 00:17:25.216 }, 00:17:25.216 { 00:17:25.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.216 "dma_device_type": 2 00:17:25.216 } 00:17:25.216 ], 00:17:25.216 "driver_specific": {} 00:17:25.216 } 00:17:25.216 ] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 BaseBdev3 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.216 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.216 16:25:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.216 [ 00:17:25.216 { 00:17:25.216 "name": "BaseBdev3", 00:17:25.216 "aliases": [ 00:17:25.216 "079470b4-e513-4340-910c-f69d9f524fb3" 00:17:25.216 ], 00:17:25.216 "product_name": "Malloc disk", 00:17:25.216 "block_size": 512, 00:17:25.216 "num_blocks": 65536, 00:17:25.216 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:25.216 "assigned_rate_limits": { 00:17:25.216 "rw_ios_per_sec": 0, 00:17:25.216 "rw_mbytes_per_sec": 0, 00:17:25.216 "r_mbytes_per_sec": 0, 00:17:25.216 "w_mbytes_per_sec": 0 00:17:25.216 }, 00:17:25.216 "claimed": false, 00:17:25.216 "zoned": false, 00:17:25.216 "supported_io_types": { 00:17:25.216 "read": true, 00:17:25.216 "write": true, 00:17:25.216 "unmap": true, 00:17:25.216 "flush": true, 00:17:25.216 "reset": true, 00:17:25.216 "nvme_admin": false, 00:17:25.216 "nvme_io": false, 00:17:25.216 "nvme_io_md": false, 00:17:25.216 "write_zeroes": true, 00:17:25.216 "zcopy": true, 00:17:25.216 "get_zone_info": false, 00:17:25.216 "zone_management": false, 00:17:25.216 "zone_append": false, 00:17:25.216 "compare": false, 00:17:25.216 "compare_and_write": false, 00:17:25.216 "abort": true, 00:17:25.216 "seek_hole": false, 00:17:25.216 "seek_data": false, 00:17:25.216 "copy": true, 00:17:25.216 "nvme_iov_md": false 00:17:25.216 }, 00:17:25.216 "memory_domains": [ 00:17:25.216 { 00:17:25.216 "dma_device_id": "system", 00:17:25.216 "dma_device_type": 1 00:17:25.216 }, 00:17:25.216 { 00:17:25.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.217 "dma_device_type": 2 00:17:25.217 } 00:17:25.217 ], 00:17:25.217 "driver_specific": {} 00:17:25.217 } 00:17:25.217 ] 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.217 [2024-10-08 16:25:18.457882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.217 [2024-10-08 16:25:18.457946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.217 [2024-10-08 16:25:18.457994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.217 [2024-10-08 16:25:18.460310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.217 16:25:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.217 "name": "Existed_Raid", 00:17:25.217 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:25.217 "strip_size_kb": 64, 00:17:25.217 "state": "configuring", 00:17:25.217 "raid_level": "raid5f", 00:17:25.217 "superblock": true, 00:17:25.217 "num_base_bdevs": 3, 00:17:25.217 "num_base_bdevs_discovered": 2, 00:17:25.217 "num_base_bdevs_operational": 3, 00:17:25.217 "base_bdevs_list": [ 00:17:25.217 { 00:17:25.217 "name": "BaseBdev1", 00:17:25.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.217 "is_configured": false, 00:17:25.217 "data_offset": 0, 00:17:25.217 "data_size": 0 00:17:25.217 }, 00:17:25.217 { 00:17:25.217 "name": "BaseBdev2", 00:17:25.217 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:25.217 "is_configured": true, 00:17:25.217 "data_offset": 2048, 00:17:25.217 "data_size": 63488 00:17:25.217 }, 00:17:25.217 { 
00:17:25.217 "name": "BaseBdev3", 00:17:25.217 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:25.217 "is_configured": true, 00:17:25.217 "data_offset": 2048, 00:17:25.217 "data_size": 63488 00:17:25.217 } 00:17:25.217 ] 00:17:25.217 }' 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.217 16:25:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 [2024-10-08 16:25:19.042079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.784 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.784 "name": "Existed_Raid", 00:17:25.784 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:25.784 "strip_size_kb": 64, 00:17:25.784 "state": "configuring", 00:17:25.784 "raid_level": "raid5f", 00:17:25.784 "superblock": true, 00:17:25.784 "num_base_bdevs": 3, 00:17:25.784 "num_base_bdevs_discovered": 1, 00:17:25.784 "num_base_bdevs_operational": 3, 00:17:25.784 "base_bdevs_list": [ 00:17:25.784 { 00:17:25.784 "name": "BaseBdev1", 00:17:25.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.784 "is_configured": false, 00:17:25.784 "data_offset": 0, 00:17:25.784 "data_size": 0 00:17:25.784 }, 00:17:25.784 { 00:17:25.784 "name": null, 00:17:25.784 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:25.784 "is_configured": false, 00:17:25.784 "data_offset": 0, 00:17:25.784 "data_size": 63488 00:17:25.784 }, 00:17:25.784 { 00:17:25.784 "name": "BaseBdev3", 00:17:25.785 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:25.785 "is_configured": true, 00:17:25.785 "data_offset": 2048, 00:17:25.785 "data_size": 
63488 00:17:25.785 } 00:17:25.785 ] 00:17:25.785 }' 00:17:25.785 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.785 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.351 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.610 [2024-10-08 16:25:19.679610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.610 BaseBdev1 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:26.610 16:25:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.610 [ 00:17:26.610 { 00:17:26.610 "name": "BaseBdev1", 00:17:26.610 "aliases": [ 00:17:26.610 "ce2c3d00-e807-4f37-a9cf-cea6c050f86a" 00:17:26.610 ], 00:17:26.610 "product_name": "Malloc disk", 00:17:26.610 "block_size": 512, 00:17:26.610 "num_blocks": 65536, 00:17:26.610 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:26.610 "assigned_rate_limits": { 00:17:26.610 "rw_ios_per_sec": 0, 00:17:26.610 "rw_mbytes_per_sec": 0, 00:17:26.610 "r_mbytes_per_sec": 0, 00:17:26.610 "w_mbytes_per_sec": 0 00:17:26.610 }, 00:17:26.610 "claimed": true, 00:17:26.610 "claim_type": "exclusive_write", 00:17:26.610 "zoned": false, 00:17:26.610 "supported_io_types": { 00:17:26.610 "read": true, 00:17:26.610 "write": true, 00:17:26.610 "unmap": true, 00:17:26.610 "flush": true, 00:17:26.610 "reset": true, 00:17:26.610 "nvme_admin": false, 00:17:26.610 
"nvme_io": false, 00:17:26.610 "nvme_io_md": false, 00:17:26.610 "write_zeroes": true, 00:17:26.610 "zcopy": true, 00:17:26.610 "get_zone_info": false, 00:17:26.610 "zone_management": false, 00:17:26.610 "zone_append": false, 00:17:26.610 "compare": false, 00:17:26.610 "compare_and_write": false, 00:17:26.610 "abort": true, 00:17:26.610 "seek_hole": false, 00:17:26.610 "seek_data": false, 00:17:26.610 "copy": true, 00:17:26.610 "nvme_iov_md": false 00:17:26.610 }, 00:17:26.610 "memory_domains": [ 00:17:26.610 { 00:17:26.610 "dma_device_id": "system", 00:17:26.610 "dma_device_type": 1 00:17:26.610 }, 00:17:26.610 { 00:17:26.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.610 "dma_device_type": 2 00:17:26.610 } 00:17:26.610 ], 00:17:26.610 "driver_specific": {} 00:17:26.610 } 00:17:26.610 ] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.610 "name": "Existed_Raid", 00:17:26.610 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:26.610 "strip_size_kb": 64, 00:17:26.610 "state": "configuring", 00:17:26.610 "raid_level": "raid5f", 00:17:26.610 "superblock": true, 00:17:26.610 "num_base_bdevs": 3, 00:17:26.610 "num_base_bdevs_discovered": 2, 00:17:26.610 "num_base_bdevs_operational": 3, 00:17:26.610 "base_bdevs_list": [ 00:17:26.610 { 00:17:26.610 "name": "BaseBdev1", 00:17:26.610 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:26.610 "is_configured": true, 00:17:26.610 "data_offset": 2048, 00:17:26.610 "data_size": 63488 00:17:26.610 }, 00:17:26.610 { 00:17:26.610 "name": null, 00:17:26.610 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:26.610 "is_configured": false, 00:17:26.610 "data_offset": 0, 00:17:26.610 "data_size": 63488 00:17:26.610 }, 00:17:26.610 { 00:17:26.610 "name": "BaseBdev3", 00:17:26.610 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:26.610 "is_configured": true, 00:17:26.610 "data_offset": 2048, 00:17:26.610 "data_size": 
63488 00:17:26.610 } 00:17:26.610 ] 00:17:26.610 }' 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.610 16:25:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.177 [2024-10-08 16:25:20.275859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.177 16:25:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.177 "name": "Existed_Raid", 00:17:27.177 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:27.177 "strip_size_kb": 64, 00:17:27.177 "state": "configuring", 00:17:27.177 "raid_level": "raid5f", 00:17:27.177 "superblock": true, 00:17:27.177 "num_base_bdevs": 3, 00:17:27.177 "num_base_bdevs_discovered": 1, 00:17:27.177 "num_base_bdevs_operational": 3, 00:17:27.177 "base_bdevs_list": [ 00:17:27.177 { 00:17:27.177 "name": "BaseBdev1", 00:17:27.177 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 
00:17:27.177 "is_configured": true, 00:17:27.177 "data_offset": 2048, 00:17:27.177 "data_size": 63488 00:17:27.177 }, 00:17:27.177 { 00:17:27.177 "name": null, 00:17:27.177 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:27.177 "is_configured": false, 00:17:27.177 "data_offset": 0, 00:17:27.177 "data_size": 63488 00:17:27.177 }, 00:17:27.177 { 00:17:27.177 "name": null, 00:17:27.177 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:27.177 "is_configured": false, 00:17:27.177 "data_offset": 0, 00:17:27.177 "data_size": 63488 00:17:27.177 } 00:17:27.177 ] 00:17:27.177 }' 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.177 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.744 [2024-10-08 16:25:20.880009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.744 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.745 "name": "Existed_Raid", 00:17:27.745 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:27.745 "strip_size_kb": 64, 00:17:27.745 "state": "configuring", 00:17:27.745 "raid_level": "raid5f", 00:17:27.745 "superblock": true, 00:17:27.745 "num_base_bdevs": 3, 00:17:27.745 "num_base_bdevs_discovered": 2, 00:17:27.745 "num_base_bdevs_operational": 3, 00:17:27.745 "base_bdevs_list": [ 00:17:27.745 { 00:17:27.745 "name": "BaseBdev1", 00:17:27.745 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:27.745 "is_configured": true, 00:17:27.745 "data_offset": 2048, 00:17:27.745 "data_size": 63488 00:17:27.745 }, 00:17:27.745 { 00:17:27.745 "name": null, 00:17:27.745 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:27.745 "is_configured": false, 00:17:27.745 "data_offset": 0, 00:17:27.745 "data_size": 63488 00:17:27.745 }, 00:17:27.745 { 00:17:27.745 "name": "BaseBdev3", 00:17:27.745 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:27.745 "is_configured": true, 00:17:27.745 "data_offset": 2048, 00:17:27.745 "data_size": 63488 00:17:27.745 } 00:17:27.745 ] 00:17:27.745 }' 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.745 16:25:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.311 16:25:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.311 [2024-10-08 16:25:21.456247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.311 "name": "Existed_Raid", 00:17:28.311 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:28.311 "strip_size_kb": 64, 00:17:28.311 "state": "configuring", 00:17:28.311 "raid_level": "raid5f", 00:17:28.311 "superblock": true, 00:17:28.311 "num_base_bdevs": 3, 00:17:28.311 "num_base_bdevs_discovered": 1, 00:17:28.311 "num_base_bdevs_operational": 3, 00:17:28.311 "base_bdevs_list": [ 00:17:28.311 { 00:17:28.311 "name": null, 00:17:28.311 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:28.311 "is_configured": false, 00:17:28.311 "data_offset": 0, 00:17:28.311 "data_size": 63488 00:17:28.311 }, 00:17:28.311 { 00:17:28.311 "name": null, 00:17:28.311 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:28.311 "is_configured": false, 00:17:28.311 "data_offset": 0, 00:17:28.311 "data_size": 63488 00:17:28.311 }, 00:17:28.311 { 00:17:28.311 "name": "BaseBdev3", 00:17:28.311 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:28.311 "is_configured": true, 00:17:28.311 "data_offset": 2048, 00:17:28.311 "data_size": 63488 00:17:28.311 } 00:17:28.311 ] 00:17:28.311 }' 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.311 16:25:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.877 [2024-10-08 16:25:22.100334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.877 16:25:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.877 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.877 "name": "Existed_Raid", 00:17:28.877 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:28.877 "strip_size_kb": 64, 00:17:28.877 "state": "configuring", 00:17:28.877 "raid_level": "raid5f", 00:17:28.877 "superblock": true, 00:17:28.877 "num_base_bdevs": 3, 00:17:28.877 "num_base_bdevs_discovered": 2, 00:17:28.877 "num_base_bdevs_operational": 3, 00:17:28.877 "base_bdevs_list": [ 00:17:28.877 { 00:17:28.877 "name": null, 00:17:28.877 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:28.877 "is_configured": false, 00:17:28.877 "data_offset": 0, 00:17:28.877 "data_size": 63488 00:17:28.877 }, 00:17:28.877 { 00:17:28.877 "name": "BaseBdev2", 00:17:28.877 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:28.878 "is_configured": true, 00:17:28.878 "data_offset": 2048, 00:17:28.878 "data_size": 63488 00:17:28.878 }, 00:17:28.878 { 
00:17:28.878 "name": "BaseBdev3", 00:17:28.878 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:28.878 "is_configured": true, 00:17:28.878 "data_offset": 2048, 00:17:28.878 "data_size": 63488 00:17:28.878 } 00:17:28.878 ] 00:17:28.878 }' 00:17:28.878 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.878 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce2c3d00-e807-4f37-a9cf-cea6c050f86a 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.443 [2024-10-08 16:25:22.759544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:29.443 [2024-10-08 16:25:22.759908] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:29.443 [2024-10-08 16:25:22.759936] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:29.443 [2024-10-08 16:25:22.760239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:29.443 NewBaseBdev 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.443 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.443 [2024-10-08 16:25:22.765327] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:29.443 
[2024-10-08 16:25:22.765359] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:29.443 [2024-10-08 16:25:22.765691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.750 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.750 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:29.750 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.751 [ 00:17:29.751 { 00:17:29.751 "name": "NewBaseBdev", 00:17:29.751 "aliases": [ 00:17:29.751 "ce2c3d00-e807-4f37-a9cf-cea6c050f86a" 00:17:29.751 ], 00:17:29.751 "product_name": "Malloc disk", 00:17:29.751 "block_size": 512, 00:17:29.751 "num_blocks": 65536, 00:17:29.751 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:29.751 "assigned_rate_limits": { 00:17:29.751 "rw_ios_per_sec": 0, 00:17:29.751 "rw_mbytes_per_sec": 0, 00:17:29.751 "r_mbytes_per_sec": 0, 00:17:29.751 "w_mbytes_per_sec": 0 00:17:29.751 }, 00:17:29.751 "claimed": true, 00:17:29.751 "claim_type": "exclusive_write", 00:17:29.751 "zoned": false, 00:17:29.751 "supported_io_types": { 00:17:29.751 "read": true, 00:17:29.751 "write": true, 00:17:29.751 "unmap": true, 00:17:29.751 "flush": true, 00:17:29.751 "reset": true, 00:17:29.751 "nvme_admin": false, 00:17:29.751 "nvme_io": false, 00:17:29.751 "nvme_io_md": false, 00:17:29.751 "write_zeroes": true, 00:17:29.751 "zcopy": true, 00:17:29.751 "get_zone_info": false, 00:17:29.751 "zone_management": false, 00:17:29.751 "zone_append": false, 00:17:29.751 "compare": false, 00:17:29.751 "compare_and_write": false, 00:17:29.751 "abort": true, 00:17:29.751 "seek_hole": false, 00:17:29.751 "seek_data": false, 
00:17:29.751 "copy": true, 00:17:29.751 "nvme_iov_md": false 00:17:29.751 }, 00:17:29.751 "memory_domains": [ 00:17:29.751 { 00:17:29.751 "dma_device_id": "system", 00:17:29.751 "dma_device_type": 1 00:17:29.751 }, 00:17:29.751 { 00:17:29.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.751 "dma_device_type": 2 00:17:29.751 } 00:17:29.751 ], 00:17:29.751 "driver_specific": {} 00:17:29.751 } 00:17:29.751 ] 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.751 16:25:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.751 "name": "Existed_Raid", 00:17:29.751 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:29.751 "strip_size_kb": 64, 00:17:29.751 "state": "online", 00:17:29.751 "raid_level": "raid5f", 00:17:29.751 "superblock": true, 00:17:29.751 "num_base_bdevs": 3, 00:17:29.751 "num_base_bdevs_discovered": 3, 00:17:29.751 "num_base_bdevs_operational": 3, 00:17:29.751 "base_bdevs_list": [ 00:17:29.751 { 00:17:29.751 "name": "NewBaseBdev", 00:17:29.751 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:29.751 "is_configured": true, 00:17:29.751 "data_offset": 2048, 00:17:29.751 "data_size": 63488 00:17:29.751 }, 00:17:29.751 { 00:17:29.751 "name": "BaseBdev2", 00:17:29.751 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:29.751 "is_configured": true, 00:17:29.751 "data_offset": 2048, 00:17:29.751 "data_size": 63488 00:17:29.751 }, 00:17:29.751 { 00:17:29.751 "name": "BaseBdev3", 00:17:29.751 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:29.751 "is_configured": true, 00:17:29.751 "data_offset": 2048, 00:17:29.751 "data_size": 63488 00:17:29.751 } 00:17:29.751 ] 00:17:29.751 }' 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.751 16:25:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.042 [2024-10-08 16:25:23.319961] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.042 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.300 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.300 "name": "Existed_Raid", 00:17:30.300 "aliases": [ 00:17:30.300 "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d" 00:17:30.300 ], 00:17:30.300 "product_name": "Raid Volume", 00:17:30.300 "block_size": 512, 00:17:30.300 "num_blocks": 126976, 00:17:30.300 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:30.300 "assigned_rate_limits": { 00:17:30.300 "rw_ios_per_sec": 0, 00:17:30.300 "rw_mbytes_per_sec": 0, 00:17:30.300 "r_mbytes_per_sec": 0, 00:17:30.300 "w_mbytes_per_sec": 0 00:17:30.300 }, 00:17:30.300 "claimed": false, 00:17:30.300 "zoned": false, 00:17:30.300 
"supported_io_types": { 00:17:30.300 "read": true, 00:17:30.300 "write": true, 00:17:30.300 "unmap": false, 00:17:30.300 "flush": false, 00:17:30.300 "reset": true, 00:17:30.300 "nvme_admin": false, 00:17:30.300 "nvme_io": false, 00:17:30.300 "nvme_io_md": false, 00:17:30.300 "write_zeroes": true, 00:17:30.300 "zcopy": false, 00:17:30.300 "get_zone_info": false, 00:17:30.300 "zone_management": false, 00:17:30.300 "zone_append": false, 00:17:30.300 "compare": false, 00:17:30.300 "compare_and_write": false, 00:17:30.300 "abort": false, 00:17:30.300 "seek_hole": false, 00:17:30.300 "seek_data": false, 00:17:30.300 "copy": false, 00:17:30.300 "nvme_iov_md": false 00:17:30.300 }, 00:17:30.300 "driver_specific": { 00:17:30.300 "raid": { 00:17:30.300 "uuid": "6c7c1fc8-3f0b-417e-bce4-7d9826efde7d", 00:17:30.300 "strip_size_kb": 64, 00:17:30.300 "state": "online", 00:17:30.300 "raid_level": "raid5f", 00:17:30.300 "superblock": true, 00:17:30.300 "num_base_bdevs": 3, 00:17:30.300 "num_base_bdevs_discovered": 3, 00:17:30.300 "num_base_bdevs_operational": 3, 00:17:30.300 "base_bdevs_list": [ 00:17:30.300 { 00:17:30.300 "name": "NewBaseBdev", 00:17:30.300 "uuid": "ce2c3d00-e807-4f37-a9cf-cea6c050f86a", 00:17:30.300 "is_configured": true, 00:17:30.300 "data_offset": 2048, 00:17:30.300 "data_size": 63488 00:17:30.300 }, 00:17:30.300 { 00:17:30.300 "name": "BaseBdev2", 00:17:30.300 "uuid": "608fa060-6ceb-45c1-b204-ce9068aab7fa", 00:17:30.300 "is_configured": true, 00:17:30.300 "data_offset": 2048, 00:17:30.300 "data_size": 63488 00:17:30.300 }, 00:17:30.300 { 00:17:30.300 "name": "BaseBdev3", 00:17:30.300 "uuid": "079470b4-e513-4340-910c-f69d9f524fb3", 00:17:30.300 "is_configured": true, 00:17:30.301 "data_offset": 2048, 00:17:30.301 "data_size": 63488 00:17:30.301 } 00:17:30.301 ] 00:17:30.301 } 00:17:30.301 } 00:17:30.301 }' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:30.301 BaseBdev2 00:17:30.301 BaseBdev3' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.301 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.559 [2024-10-08 16:25:23.623734] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.559 [2024-10-08 16:25:23.623771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:30.559 [2024-10-08 16:25:23.623862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.559 [2024-10-08 16:25:23.624224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.559 [2024-10-08 16:25:23.624258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81231 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81231 ']' 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81231 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81231 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.559 killing process with pid 81231 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81231' 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81231 00:17:30.559 [2024-10-08 16:25:23.658759] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.559 16:25:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 81231 00:17:30.818 [2024-10-08 16:25:23.929376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.192 16:25:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:32.192 00:17:32.192 real 0m12.015s 00:17:32.192 user 0m19.854s 00:17:32.192 sys 0m1.711s 00:17:32.192 16:25:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.192 16:25:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 ************************************ 00:17:32.192 END TEST raid5f_state_function_test_sb 00:17:32.192 ************************************ 00:17:32.192 16:25:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:32.192 16:25:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:32.192 16:25:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.192 16:25:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 ************************************ 00:17:32.192 START TEST raid5f_superblock_test 00:17:32.192 ************************************ 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81863 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81863 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81863 ']' 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.192 16:25:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 [2024-10-08 16:25:25.287752] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:17:32.192 [2024-10-08 16:25:25.288021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81863 ] 00:17:32.192 [2024-10-08 16:25:25.464711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.459 [2024-10-08 16:25:25.713342] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.742 [2024-10-08 16:25:25.921499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.742 [2024-10-08 16:25:25.921571] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.001 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 malloc1 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 [2024-10-08 16:25:26.343712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.260 [2024-10-08 16:25:26.343795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.260 [2024-10-08 16:25:26.343826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.260 [2024-10-08 16:25:26.343844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.260 [2024-10-08 16:25:26.346771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.260 [2024-10-08 16:25:26.346820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.260 pt1 00:17:33.260 
16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 malloc2 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 [2024-10-08 16:25:26.410858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.260 [2024-10-08 
16:25:26.410928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.260 [2024-10-08 16:25:26.410963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.260 [2024-10-08 16:25:26.410978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.260 [2024-10-08 16:25:26.413932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.260 [2024-10-08 16:25:26.413980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.260 pt2 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.260 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 malloc3 00:17:33.260 16:25:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.261 [2024-10-08 16:25:26.468558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:33.261 [2024-10-08 16:25:26.468627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.261 [2024-10-08 16:25:26.468657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.261 [2024-10-08 16:25:26.468672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.261 [2024-10-08 16:25:26.471548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.261 [2024-10-08 16:25:26.471642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:33.261 pt3 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.261 [2024-10-08 16:25:26.476640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:17:33.261 [2024-10-08 16:25:26.479063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.261 [2024-10-08 16:25:26.479169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:33.261 [2024-10-08 16:25:26.479394] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:33.261 [2024-10-08 16:25:26.479435] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:33.261 [2024-10-08 16:25:26.479748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:33.261 [2024-10-08 16:25:26.484904] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:33.261 [2024-10-08 16:25:26.484953] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:33.261 [2024-10-08 16:25:26.485197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.261 "name": "raid_bdev1", 00:17:33.261 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:33.261 "strip_size_kb": 64, 00:17:33.261 "state": "online", 00:17:33.261 "raid_level": "raid5f", 00:17:33.261 "superblock": true, 00:17:33.261 "num_base_bdevs": 3, 00:17:33.261 "num_base_bdevs_discovered": 3, 00:17:33.261 "num_base_bdevs_operational": 3, 00:17:33.261 "base_bdevs_list": [ 00:17:33.261 { 00:17:33.261 "name": "pt1", 00:17:33.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.261 "is_configured": true, 00:17:33.261 "data_offset": 2048, 00:17:33.261 "data_size": 63488 00:17:33.261 }, 00:17:33.261 { 00:17:33.261 "name": "pt2", 00:17:33.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.261 "is_configured": true, 00:17:33.261 "data_offset": 2048, 00:17:33.261 "data_size": 63488 00:17:33.261 }, 00:17:33.261 { 00:17:33.261 "name": "pt3", 00:17:33.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.261 "is_configured": true, 00:17:33.261 "data_offset": 2048, 00:17:33.261 "data_size": 63488 00:17:33.261 } 00:17:33.261 ] 
00:17:33.261 }' 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.261 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.828 [2024-10-08 16:25:26.971225] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.828 16:25:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.828 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.828 "name": "raid_bdev1", 00:17:33.828 "aliases": [ 00:17:33.828 "514fc82b-19ef-4b80-90ae-4a4b9b3e814c" 00:17:33.828 ], 00:17:33.828 "product_name": "Raid Volume", 00:17:33.828 "block_size": 512, 00:17:33.828 "num_blocks": 126976, 00:17:33.828 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:33.828 "assigned_rate_limits": { 00:17:33.828 
"rw_ios_per_sec": 0, 00:17:33.828 "rw_mbytes_per_sec": 0, 00:17:33.828 "r_mbytes_per_sec": 0, 00:17:33.828 "w_mbytes_per_sec": 0 00:17:33.828 }, 00:17:33.828 "claimed": false, 00:17:33.828 "zoned": false, 00:17:33.828 "supported_io_types": { 00:17:33.828 "read": true, 00:17:33.828 "write": true, 00:17:33.828 "unmap": false, 00:17:33.828 "flush": false, 00:17:33.828 "reset": true, 00:17:33.828 "nvme_admin": false, 00:17:33.828 "nvme_io": false, 00:17:33.828 "nvme_io_md": false, 00:17:33.828 "write_zeroes": true, 00:17:33.828 "zcopy": false, 00:17:33.828 "get_zone_info": false, 00:17:33.828 "zone_management": false, 00:17:33.828 "zone_append": false, 00:17:33.828 "compare": false, 00:17:33.828 "compare_and_write": false, 00:17:33.828 "abort": false, 00:17:33.828 "seek_hole": false, 00:17:33.828 "seek_data": false, 00:17:33.828 "copy": false, 00:17:33.828 "nvme_iov_md": false 00:17:33.828 }, 00:17:33.828 "driver_specific": { 00:17:33.828 "raid": { 00:17:33.828 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:33.828 "strip_size_kb": 64, 00:17:33.828 "state": "online", 00:17:33.828 "raid_level": "raid5f", 00:17:33.828 "superblock": true, 00:17:33.828 "num_base_bdevs": 3, 00:17:33.828 "num_base_bdevs_discovered": 3, 00:17:33.828 "num_base_bdevs_operational": 3, 00:17:33.828 "base_bdevs_list": [ 00:17:33.828 { 00:17:33.828 "name": "pt1", 00:17:33.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.828 "is_configured": true, 00:17:33.828 "data_offset": 2048, 00:17:33.828 "data_size": 63488 00:17:33.828 }, 00:17:33.828 { 00:17:33.828 "name": "pt2", 00:17:33.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.828 "is_configured": true, 00:17:33.828 "data_offset": 2048, 00:17:33.828 "data_size": 63488 00:17:33.828 }, 00:17:33.828 { 00:17:33.828 "name": "pt3", 00:17:33.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.828 "is_configured": true, 00:17:33.828 "data_offset": 2048, 00:17:33.828 "data_size": 63488 00:17:33.828 } 00:17:33.828 ] 
00:17:33.828 } 00:17:33.828 } 00:17:33.828 }' 00:17:33.828 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.828 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.828 pt2 00:17:33.828 pt3' 00:17:33.828 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.828 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.829 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:34.087 [2024-10-08 16:25:27.295192] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.087 16:25:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=514fc82b-19ef-4b80-90ae-4a4b9b3e814c 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 514fc82b-19ef-4b80-90ae-4a4b9b3e814c ']' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 [2024-10-08 16:25:27.346997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.087 [2024-10-08 16:25:27.347035] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.087 [2024-10-08 16:25:27.347121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.087 [2024-10-08 16:25:27.347217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.087 [2024-10-08 16:25:27.347234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.087 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.346 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.346 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.346 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:34.346 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.346 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 [2024-10-08 16:25:27.495085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.347 [2024-10-08 
16:25:27.497706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.347 [2024-10-08 16:25:27.497787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:34.347 [2024-10-08 16:25:27.497866] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:34.347 [2024-10-08 16:25:27.497970] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:34.347 [2024-10-08 16:25:27.498020] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:34.347 [2024-10-08 16:25:27.498048] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.347 [2024-10-08 16:25:27.498064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:34.347 request: 00:17:34.347 { 00:17:34.347 "name": "raid_bdev1", 00:17:34.347 "raid_level": "raid5f", 00:17:34.347 "base_bdevs": [ 00:17:34.347 "malloc1", 00:17:34.347 "malloc2", 00:17:34.347 "malloc3" 00:17:34.347 ], 00:17:34.347 "strip_size_kb": 64, 00:17:34.347 "superblock": false, 00:17:34.347 "method": "bdev_raid_create", 00:17:34.347 "req_id": 1 00:17:34.347 } 00:17:34.347 Got JSON-RPC error response 00:17:34.347 response: 00:17:34.347 { 00:17:34.347 "code": -17, 00:17:34.347 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.347 } 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 [2024-10-08 16:25:27.555035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.347 [2024-10-08 16:25:27.555125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.347 [2024-10-08 16:25:27.555154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:34.347 [2024-10-08 16:25:27.555167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.347 [2024-10-08 16:25:27.558046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.347 [2024-10-08 16:25:27.558108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.347 [2024-10-08 16:25:27.558211] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.347 [2024-10-08 16:25:27.558272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.347 pt1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.347 "name": "raid_bdev1", 00:17:34.347 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:34.347 "strip_size_kb": 64, 00:17:34.347 "state": "configuring", 00:17:34.347 "raid_level": "raid5f", 00:17:34.347 "superblock": true, 00:17:34.347 "num_base_bdevs": 3, 00:17:34.347 "num_base_bdevs_discovered": 1, 00:17:34.347 "num_base_bdevs_operational": 3, 00:17:34.347 "base_bdevs_list": [ 00:17:34.347 { 00:17:34.347 "name": "pt1", 00:17:34.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.347 "is_configured": true, 00:17:34.347 "data_offset": 2048, 00:17:34.347 "data_size": 63488 00:17:34.347 }, 00:17:34.347 { 00:17:34.347 "name": null, 00:17:34.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.347 "is_configured": false, 00:17:34.347 "data_offset": 2048, 00:17:34.347 "data_size": 63488 00:17:34.347 }, 00:17:34.347 { 00:17:34.347 "name": null, 00:17:34.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.347 "is_configured": false, 00:17:34.347 "data_offset": 2048, 00:17:34.347 "data_size": 63488 00:17:34.347 } 00:17:34.347 ] 00:17:34.347 }' 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.347 16:25:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.917 [2024-10-08 16:25:28.059262] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.917 [2024-10-08 16:25:28.059341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.917 [2024-10-08 16:25:28.059375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:34.917 [2024-10-08 16:25:28.059390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.917 [2024-10-08 16:25:28.059967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.917 [2024-10-08 16:25:28.060009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.917 [2024-10-08 16:25:28.060121] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.917 [2024-10-08 16:25:28.060153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.917 pt2 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.917 [2024-10-08 16:25:28.067250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.917 "name": "raid_bdev1", 00:17:34.917 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:34.917 "strip_size_kb": 64, 00:17:34.917 "state": "configuring", 00:17:34.917 "raid_level": "raid5f", 00:17:34.917 "superblock": true, 00:17:34.917 "num_base_bdevs": 3, 00:17:34.917 "num_base_bdevs_discovered": 1, 00:17:34.917 "num_base_bdevs_operational": 3, 00:17:34.917 "base_bdevs_list": [ 00:17:34.917 { 00:17:34.917 "name": "pt1", 00:17:34.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.917 "is_configured": true, 00:17:34.917 "data_offset": 2048, 00:17:34.917 "data_size": 63488 00:17:34.917 }, 00:17:34.917 { 
00:17:34.917 "name": null, 00:17:34.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.917 "is_configured": false, 00:17:34.917 "data_offset": 0, 00:17:34.917 "data_size": 63488 00:17:34.917 }, 00:17:34.917 { 00:17:34.917 "name": null, 00:17:34.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.917 "is_configured": false, 00:17:34.917 "data_offset": 2048, 00:17:34.917 "data_size": 63488 00:17:34.917 } 00:17:34.917 ] 00:17:34.917 }' 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.917 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 [2024-10-08 16:25:28.575364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.484 [2024-10-08 16:25:28.575544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.484 [2024-10-08 16:25:28.575600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:35.484 [2024-10-08 16:25:28.575621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.484 [2024-10-08 16:25:28.576198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.484 [2024-10-08 16:25:28.576239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.484 [2024-10-08 
16:25:28.576341] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.484 [2024-10-08 16:25:28.576393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.484 pt2 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 [2024-10-08 16:25:28.583364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:35.484 [2024-10-08 16:25:28.583483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.484 [2024-10-08 16:25:28.583504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:35.484 [2024-10-08 16:25:28.583551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.484 [2024-10-08 16:25:28.584036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.484 [2024-10-08 16:25:28.584088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:35.484 [2024-10-08 16:25:28.584176] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:35.484 [2024-10-08 16:25:28.584225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:35.484 [2024-10-08 16:25:28.584416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:35.484 [2024-10-08 16:25:28.584446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:35.484 [2024-10-08 16:25:28.584772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:35.484 [2024-10-08 16:25:28.590025] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:35.484 [2024-10-08 16:25:28.590052] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:35.484 [2024-10-08 16:25:28.590321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.484 pt3 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.484 "name": "raid_bdev1", 00:17:35.484 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:35.484 "strip_size_kb": 64, 00:17:35.485 "state": "online", 00:17:35.485 "raid_level": "raid5f", 00:17:35.485 "superblock": true, 00:17:35.485 "num_base_bdevs": 3, 00:17:35.485 "num_base_bdevs_discovered": 3, 00:17:35.485 "num_base_bdevs_operational": 3, 00:17:35.485 "base_bdevs_list": [ 00:17:35.485 { 00:17:35.485 "name": "pt1", 00:17:35.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.485 "is_configured": true, 00:17:35.485 "data_offset": 2048, 00:17:35.485 "data_size": 63488 00:17:35.485 }, 00:17:35.485 { 00:17:35.485 "name": "pt2", 00:17:35.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.485 "is_configured": true, 00:17:35.485 "data_offset": 2048, 00:17:35.485 "data_size": 63488 00:17:35.485 }, 00:17:35.485 { 00:17:35.485 "name": "pt3", 00:17:35.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.485 "is_configured": true, 00:17:35.485 "data_offset": 2048, 00:17:35.485 "data_size": 63488 00:17:35.485 } 00:17:35.485 ] 00:17:35.485 }' 00:17:35.485 16:25:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.485 16:25:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.051 [2024-10-08 16:25:29.108434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.051 "name": "raid_bdev1", 00:17:36.051 "aliases": [ 00:17:36.051 "514fc82b-19ef-4b80-90ae-4a4b9b3e814c" 00:17:36.051 ], 00:17:36.051 "product_name": "Raid Volume", 00:17:36.051 "block_size": 512, 00:17:36.051 "num_blocks": 126976, 00:17:36.051 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:36.051 "assigned_rate_limits": { 00:17:36.051 "rw_ios_per_sec": 0, 00:17:36.051 "rw_mbytes_per_sec": 0, 00:17:36.051 "r_mbytes_per_sec": 0, 00:17:36.051 "w_mbytes_per_sec": 0 00:17:36.051 }, 
00:17:36.051 "claimed": false, 00:17:36.051 "zoned": false, 00:17:36.051 "supported_io_types": { 00:17:36.051 "read": true, 00:17:36.051 "write": true, 00:17:36.051 "unmap": false, 00:17:36.051 "flush": false, 00:17:36.051 "reset": true, 00:17:36.051 "nvme_admin": false, 00:17:36.051 "nvme_io": false, 00:17:36.051 "nvme_io_md": false, 00:17:36.051 "write_zeroes": true, 00:17:36.051 "zcopy": false, 00:17:36.051 "get_zone_info": false, 00:17:36.051 "zone_management": false, 00:17:36.051 "zone_append": false, 00:17:36.051 "compare": false, 00:17:36.051 "compare_and_write": false, 00:17:36.051 "abort": false, 00:17:36.051 "seek_hole": false, 00:17:36.051 "seek_data": false, 00:17:36.051 "copy": false, 00:17:36.051 "nvme_iov_md": false 00:17:36.051 }, 00:17:36.051 "driver_specific": { 00:17:36.051 "raid": { 00:17:36.051 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:36.051 "strip_size_kb": 64, 00:17:36.051 "state": "online", 00:17:36.051 "raid_level": "raid5f", 00:17:36.051 "superblock": true, 00:17:36.051 "num_base_bdevs": 3, 00:17:36.051 "num_base_bdevs_discovered": 3, 00:17:36.051 "num_base_bdevs_operational": 3, 00:17:36.051 "base_bdevs_list": [ 00:17:36.051 { 00:17:36.051 "name": "pt1", 00:17:36.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.051 "is_configured": true, 00:17:36.051 "data_offset": 2048, 00:17:36.051 "data_size": 63488 00:17:36.051 }, 00:17:36.051 { 00:17:36.051 "name": "pt2", 00:17:36.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.051 "is_configured": true, 00:17:36.051 "data_offset": 2048, 00:17:36.051 "data_size": 63488 00:17:36.051 }, 00:17:36.051 { 00:17:36.051 "name": "pt3", 00:17:36.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.051 "is_configured": true, 00:17:36.051 "data_offset": 2048, 00:17:36.051 "data_size": 63488 00:17:36.051 } 00:17:36.051 ] 00:17:36.051 } 00:17:36.051 } 00:17:36.051 }' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:36.051 pt2 00:17:36.051 pt3' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.051 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:36.309 [2024-10-08 16:25:29.416504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
514fc82b-19ef-4b80-90ae-4a4b9b3e814c '!=' 514fc82b-19ef-4b80-90ae-4a4b9b3e814c ']' 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.309 [2024-10-08 16:25:29.456334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.309 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.310 "name": "raid_bdev1", 00:17:36.310 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:36.310 "strip_size_kb": 64, 00:17:36.310 "state": "online", 00:17:36.310 "raid_level": "raid5f", 00:17:36.310 "superblock": true, 00:17:36.310 "num_base_bdevs": 3, 00:17:36.310 "num_base_bdevs_discovered": 2, 00:17:36.310 "num_base_bdevs_operational": 2, 00:17:36.310 "base_bdevs_list": [ 00:17:36.310 { 00:17:36.310 "name": null, 00:17:36.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.310 "is_configured": false, 00:17:36.310 "data_offset": 0, 00:17:36.310 "data_size": 63488 00:17:36.310 }, 00:17:36.310 { 00:17:36.310 "name": "pt2", 00:17:36.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.310 "is_configured": true, 00:17:36.310 "data_offset": 2048, 00:17:36.310 "data_size": 63488 00:17:36.310 }, 00:17:36.310 { 00:17:36.310 "name": "pt3", 00:17:36.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.310 "is_configured": true, 00:17:36.310 "data_offset": 2048, 00:17:36.310 "data_size": 63488 00:17:36.310 } 00:17:36.310 ] 00:17:36.310 }' 00:17:36.310 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.310 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.874 
16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.874 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.874 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.874 [2024-10-08 16:25:29.956411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.874 [2024-10-08 16:25:29.956457] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.874 [2024-10-08 16:25:29.956581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.874 [2024-10-08 16:25:29.956665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.875 [2024-10-08 16:25:29.956690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:36.875 16:25:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.875 [2024-10-08 16:25:30.040401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:36.875 [2024-10-08 16:25:30.040477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.875 [2024-10-08 16:25:30.040504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:36.875 [2024-10-08 16:25:30.040535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.875 [2024-10-08 16:25:30.043607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.875 [2024-10-08 16:25:30.043666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.875 [2024-10-08 16:25:30.043773] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:36.875 [2024-10-08 16:25:30.043839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.875 pt2 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.875 "name": "raid_bdev1", 00:17:36.875 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:36.875 "strip_size_kb": 64, 00:17:36.875 "state": "configuring", 00:17:36.875 "raid_level": "raid5f", 00:17:36.875 "superblock": true, 00:17:36.875 "num_base_bdevs": 3, 00:17:36.875 "num_base_bdevs_discovered": 1, 00:17:36.875 "num_base_bdevs_operational": 2, 00:17:36.875 "base_bdevs_list": [ 00:17:36.875 { 00:17:36.875 "name": null, 00:17:36.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.875 "is_configured": false, 00:17:36.875 "data_offset": 2048, 00:17:36.875 "data_size": 63488 00:17:36.875 }, 00:17:36.875 { 00:17:36.875 "name": "pt2", 00:17:36.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.875 "is_configured": true, 00:17:36.875 "data_offset": 2048, 00:17:36.875 "data_size": 63488 00:17:36.875 }, 00:17:36.875 { 00:17:36.875 "name": null, 00:17:36.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.875 "is_configured": false, 00:17:36.875 "data_offset": 2048, 00:17:36.875 "data_size": 63488 00:17:36.875 } 00:17:36.875 ] 00:17:36.875 }' 00:17:36.875 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.875 16:25:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.440 [2024-10-08 16:25:30.568568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.440 [2024-10-08 16:25:30.568798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.440 [2024-10-08 16:25:30.568880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:37.440 [2024-10-08 16:25:30.569019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.440 [2024-10-08 16:25:30.569652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.440 [2024-10-08 16:25:30.569831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.440 [2024-10-08 16:25:30.570064] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:37.440 [2024-10-08 16:25:30.570237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.440 [2024-10-08 16:25:30.570408] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:37.440 [2024-10-08 16:25:30.570431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:37.440 [2024-10-08 
16:25:30.570751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:37.440 [2024-10-08 16:25:30.575688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:37.440 pt3 00:17:37.440 [2024-10-08 16:25:30.575843] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:37.440 [2024-10-08 16:25:30.576261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.440 "name": "raid_bdev1", 00:17:37.440 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:37.440 "strip_size_kb": 64, 00:17:37.440 "state": "online", 00:17:37.440 "raid_level": "raid5f", 00:17:37.440 "superblock": true, 00:17:37.440 "num_base_bdevs": 3, 00:17:37.440 "num_base_bdevs_discovered": 2, 00:17:37.440 "num_base_bdevs_operational": 2, 00:17:37.440 "base_bdevs_list": [ 00:17:37.440 { 00:17:37.440 "name": null, 00:17:37.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.440 "is_configured": false, 00:17:37.440 "data_offset": 2048, 00:17:37.440 "data_size": 63488 00:17:37.440 }, 00:17:37.440 { 00:17:37.440 "name": "pt2", 00:17:37.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.440 "is_configured": true, 00:17:37.440 "data_offset": 2048, 00:17:37.440 "data_size": 63488 00:17:37.440 }, 00:17:37.440 { 00:17:37.440 "name": "pt3", 00:17:37.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.440 "is_configured": true, 00:17:37.440 "data_offset": 2048, 00:17:37.440 "data_size": 63488 00:17:37.440 } 00:17:37.440 ] 00:17:37.440 }' 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.440 16:25:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.007 [2024-10-08 16:25:31.077933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.007 [2024-10-08 16:25:31.077972] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.007 [2024-10-08 16:25:31.078068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.007 [2024-10-08 16:25:31.078154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.007 [2024-10-08 16:25:31.078171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.007 16:25:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.007 [2024-10-08 16:25:31.141942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:38.007 [2024-10-08 16:25:31.142147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.007 [2024-10-08 16:25:31.142188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:38.007 [2024-10-08 16:25:31.142204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.007 [2024-10-08 16:25:31.145160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.007 [2024-10-08 16:25:31.145352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:38.007 [2024-10-08 16:25:31.145471] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:38.007 [2024-10-08 16:25:31.145547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:38.007 [2024-10-08 16:25:31.145722] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:38.007 [2024-10-08 16:25:31.145744] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.007 [2024-10-08 16:25:31.145767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:38.007 
[2024-10-08 16:25:31.145838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.007 pt1 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.007 "name": "raid_bdev1", 00:17:38.007 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:38.007 "strip_size_kb": 64, 00:17:38.007 "state": "configuring", 00:17:38.007 "raid_level": "raid5f", 00:17:38.007 "superblock": true, 00:17:38.007 "num_base_bdevs": 3, 00:17:38.007 "num_base_bdevs_discovered": 1, 00:17:38.007 "num_base_bdevs_operational": 2, 00:17:38.007 "base_bdevs_list": [ 00:17:38.007 { 00:17:38.007 "name": null, 00:17:38.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.007 "is_configured": false, 00:17:38.007 "data_offset": 2048, 00:17:38.007 "data_size": 63488 00:17:38.007 }, 00:17:38.007 { 00:17:38.007 "name": "pt2", 00:17:38.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.007 "is_configured": true, 00:17:38.007 "data_offset": 2048, 00:17:38.007 "data_size": 63488 00:17:38.007 }, 00:17:38.007 { 00:17:38.007 "name": null, 00:17:38.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.007 "is_configured": false, 00:17:38.007 "data_offset": 2048, 00:17:38.007 "data_size": 63488 00:17:38.007 } 00:17:38.007 ] 00:17:38.007 }' 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.007 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.574 [2024-10-08 16:25:31.726170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:38.574 [2024-10-08 16:25:31.726247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.574 [2024-10-08 16:25:31.726280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:38.574 [2024-10-08 16:25:31.726295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.574 [2024-10-08 16:25:31.726889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.574 [2024-10-08 16:25:31.726933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:38.574 [2024-10-08 16:25:31.727034] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:38.574 [2024-10-08 16:25:31.727066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:38.574 [2024-10-08 16:25:31.727220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:38.574 [2024-10-08 16:25:31.727236] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:38.574 [2024-10-08 16:25:31.727613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:38.574 [2024-10-08 16:25:31.732844] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:38.574 [2024-10-08 
16:25:31.732878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:38.574 [2024-10-08 16:25:31.733166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.574 pt3 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.574 16:25:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.574 "name": "raid_bdev1", 00:17:38.574 "uuid": "514fc82b-19ef-4b80-90ae-4a4b9b3e814c", 00:17:38.574 "strip_size_kb": 64, 00:17:38.574 "state": "online", 00:17:38.574 "raid_level": "raid5f", 00:17:38.574 "superblock": true, 00:17:38.574 "num_base_bdevs": 3, 00:17:38.574 "num_base_bdevs_discovered": 2, 00:17:38.574 "num_base_bdevs_operational": 2, 00:17:38.574 "base_bdevs_list": [ 00:17:38.574 { 00:17:38.574 "name": null, 00:17:38.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.574 "is_configured": false, 00:17:38.574 "data_offset": 2048, 00:17:38.574 "data_size": 63488 00:17:38.574 }, 00:17:38.574 { 00:17:38.574 "name": "pt2", 00:17:38.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.574 "is_configured": true, 00:17:38.574 "data_offset": 2048, 00:17:38.574 "data_size": 63488 00:17:38.574 }, 00:17:38.574 { 00:17:38.574 "name": "pt3", 00:17:38.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.574 "is_configured": true, 00:17:38.574 "data_offset": 2048, 00:17:38.574 "data_size": 63488 00:17:38.574 } 00:17:38.574 ] 00:17:38.574 }' 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.574 16:25:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.141 [2024-10-08 16:25:32.271155] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 514fc82b-19ef-4b80-90ae-4a4b9b3e814c '!=' 514fc82b-19ef-4b80-90ae-4a4b9b3e814c ']' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81863 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81863 ']' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81863 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81863 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 81863' 00:17:39.141 killing process with pid 81863 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81863 00:17:39.141 [2024-10-08 16:25:32.356285] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.141 [2024-10-08 16:25:32.356434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.141 16:25:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81863 00:17:39.141 [2024-10-08 16:25:32.356553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.141 [2024-10-08 16:25:32.356578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:39.400 [2024-10-08 16:25:32.622381] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.774 16:25:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.774 00:17:40.774 real 0m8.657s 00:17:40.774 user 0m13.988s 00:17:40.774 sys 0m1.242s 00:17:40.774 ************************************ 00:17:40.774 END TEST raid5f_superblock_test 00:17:40.774 ************************************ 00:17:40.774 16:25:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.774 16:25:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.774 16:25:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:40.774 16:25:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:40.774 16:25:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:40.774 16:25:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.774 16:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.774 ************************************ 00:17:40.774 START TEST raid5f_rebuild_test 
00:17:40.774 ************************************ 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:40.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82307 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82307 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 82307 ']' 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.774 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.775 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.775 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.775 16:25:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.775 [2024-10-08 16:25:34.008502] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:17:40.775 [2024-10-08 16:25:34.008943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82307 ] 00:17:40.775 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:40.775 Zero copy mechanism will not be used. 
00:17:41.032 [2024-10-08 16:25:34.185223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.291 [2024-10-08 16:25:34.466909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.549 [2024-10-08 16:25:34.721172] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.549 [2024-10-08 16:25:34.721544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.808 16:25:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.808 BaseBdev1_malloc 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.808 [2024-10-08 16:25:35.014924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.808 [2024-10-08 16:25:35.015175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.808 [2024-10-08 16:25:35.015256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.808 [2024-10-08 16:25:35.015558] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.808 [2024-10-08 16:25:35.018351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.808 [2024-10-08 16:25:35.018405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.808 BaseBdev1 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.808 BaseBdev2_malloc 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.808 [2024-10-08 16:25:35.081744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:41.808 [2024-10-08 16:25:35.081824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.808 [2024-10-08 16:25:35.081855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.808 [2024-10-08 16:25:35.081874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.808 [2024-10-08 16:25:35.084775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.808 
[2024-10-08 16:25:35.084972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:41.808 BaseBdev2 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.808 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 BaseBdev3_malloc 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 [2024-10-08 16:25:35.139533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:42.073 [2024-10-08 16:25:35.139618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.073 [2024-10-08 16:25:35.139651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:42.073 [2024-10-08 16:25:35.139671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.073 [2024-10-08 16:25:35.142461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.073 [2024-10-08 16:25:35.142514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.073 BaseBdev3 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 spare_malloc 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 spare_delay 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.074 [2024-10-08 16:25:35.200978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.074 [2024-10-08 16:25:35.201180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.074 [2024-10-08 16:25:35.201219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:42.074 [2024-10-08 16:25:35.201239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.074 [2024-10-08 16:25:35.204065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.074 [2024-10-08 16:25:35.204119] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.074 spare 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.074 [2024-10-08 16:25:35.209114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.074 [2024-10-08 16:25:35.211568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.074 [2024-10-08 16:25:35.211837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.074 [2024-10-08 16:25:35.212010] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.074 [2024-10-08 16:25:35.212034] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:42.074 [2024-10-08 16:25:35.212448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:42.074 [2024-10-08 16:25:35.217673] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.074 [2024-10-08 16:25:35.217724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.074 [2024-10-08 16:25:35.217982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:42.074 16:25:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.074 "name": "raid_bdev1", 00:17:42.074 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:42.074 "strip_size_kb": 64, 00:17:42.074 "state": "online", 00:17:42.074 "raid_level": "raid5f", 00:17:42.074 "superblock": false, 00:17:42.074 "num_base_bdevs": 3, 00:17:42.074 "num_base_bdevs_discovered": 3, 00:17:42.074 "num_base_bdevs_operational": 3, 00:17:42.074 "base_bdevs_list": [ 00:17:42.074 { 
00:17:42.074 "name": "BaseBdev1", 00:17:42.074 "uuid": "13e47635-3cc1-5f6f-bdf0-dcf9ae125797", 00:17:42.074 "is_configured": true, 00:17:42.074 "data_offset": 0, 00:17:42.074 "data_size": 65536 00:17:42.074 }, 00:17:42.074 { 00:17:42.074 "name": "BaseBdev2", 00:17:42.074 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:42.074 "is_configured": true, 00:17:42.074 "data_offset": 0, 00:17:42.074 "data_size": 65536 00:17:42.074 }, 00:17:42.074 { 00:17:42.074 "name": "BaseBdev3", 00:17:42.074 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:42.074 "is_configured": true, 00:17:42.074 "data_offset": 0, 00:17:42.074 "data_size": 65536 00:17:42.074 } 00:17:42.074 ] 00:17:42.074 }' 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.074 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.640 [2024-10-08 16:25:35.744244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.640 
16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.640 16:25:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:42.899 [2024-10-08 16:25:36.148417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:42.899 /dev/nbd0 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.899 16:25:36 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.899 1+0 records in 00:17:42.899 1+0 records out 00:17:42.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033224 s, 12.3 MB/s 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:42.899 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:43.157 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:43.415 512+0 records in 00:17:43.415 512+0 records out 00:17:43.415 67108864 bytes (67 MB, 64 MiB) copied, 0.510331 s, 132 MB/s 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.674 16:25:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.932 [2024-10-08 16:25:37.032136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.932 [2024-10-08 16:25:37.061956] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.932 16:25:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.932 "name": "raid_bdev1", 00:17:43.932 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:43.932 "strip_size_kb": 64, 00:17:43.932 "state": "online", 00:17:43.932 "raid_level": "raid5f", 00:17:43.932 "superblock": false, 00:17:43.932 "num_base_bdevs": 3, 00:17:43.932 "num_base_bdevs_discovered": 2, 00:17:43.932 "num_base_bdevs_operational": 2, 00:17:43.932 "base_bdevs_list": [ 00:17:43.932 { 00:17:43.932 "name": null, 00:17:43.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.932 "is_configured": false, 00:17:43.932 "data_offset": 0, 00:17:43.932 "data_size": 65536 00:17:43.932 }, 00:17:43.932 { 00:17:43.932 "name": "BaseBdev2", 00:17:43.932 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:43.932 "is_configured": true, 00:17:43.932 "data_offset": 0, 00:17:43.932 "data_size": 65536 00:17:43.932 }, 00:17:43.932 { 00:17:43.932 "name": "BaseBdev3", 00:17:43.932 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:43.932 "is_configured": true, 00:17:43.932 "data_offset": 0, 00:17:43.932 "data_size": 65536 00:17:43.932 } 00:17:43.932 ] 00:17:43.932 }' 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.932 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.498 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.498 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.498 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.498 [2024-10-08 16:25:37.586160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.498 [2024-10-08 16:25:37.600692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:44.498 16:25:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.498 16:25:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:44.498 [2024-10-08 16:25:37.607905] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.432 16:25:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.432 "name": "raid_bdev1", 00:17:45.432 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:45.432 "strip_size_kb": 64, 00:17:45.432 "state": "online", 00:17:45.432 "raid_level": "raid5f", 00:17:45.432 "superblock": false, 00:17:45.432 "num_base_bdevs": 3, 00:17:45.432 "num_base_bdevs_discovered": 3, 00:17:45.432 "num_base_bdevs_operational": 3, 00:17:45.432 "process": { 00:17:45.432 "type": "rebuild", 00:17:45.432 "target": "spare", 00:17:45.432 "progress": { 00:17:45.432 "blocks": 18432, 00:17:45.432 "percent": 14 00:17:45.432 } 00:17:45.432 }, 00:17:45.432 "base_bdevs_list": [ 00:17:45.432 { 00:17:45.432 "name": "spare", 00:17:45.432 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:45.432 "is_configured": true, 00:17:45.432 "data_offset": 0, 00:17:45.432 "data_size": 65536 00:17:45.432 }, 00:17:45.432 { 00:17:45.432 "name": "BaseBdev2", 00:17:45.432 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:45.432 "is_configured": true, 00:17:45.432 "data_offset": 0, 00:17:45.432 "data_size": 65536 00:17:45.432 }, 00:17:45.432 { 00:17:45.432 "name": "BaseBdev3", 00:17:45.433 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:45.433 "is_configured": true, 00:17:45.433 "data_offset": 0, 00:17:45.433 "data_size": 65536 00:17:45.433 } 00:17:45.433 ] 00:17:45.433 }' 00:17:45.433 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.433 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.433 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.691 [2024-10-08 16:25:38.774492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.691 [2024-10-08 16:25:38.822859] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.691 [2024-10-08 16:25:38.822942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.691 [2024-10-08 16:25:38.822976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.691 [2024-10-08 16:25:38.822989] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.691 16:25:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.691 "name": "raid_bdev1", 00:17:45.691 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:45.691 "strip_size_kb": 64, 00:17:45.691 "state": "online", 00:17:45.691 "raid_level": "raid5f", 00:17:45.691 "superblock": false, 00:17:45.691 "num_base_bdevs": 3, 00:17:45.691 "num_base_bdevs_discovered": 2, 00:17:45.691 "num_base_bdevs_operational": 2, 00:17:45.691 "base_bdevs_list": [ 00:17:45.691 { 00:17:45.691 "name": null, 00:17:45.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.691 "is_configured": false, 00:17:45.691 "data_offset": 0, 00:17:45.691 "data_size": 65536 00:17:45.691 }, 00:17:45.691 { 00:17:45.691 "name": "BaseBdev2", 00:17:45.691 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:45.691 "is_configured": true, 00:17:45.691 "data_offset": 0, 00:17:45.691 "data_size": 65536 00:17:45.691 }, 00:17:45.691 { 00:17:45.691 "name": "BaseBdev3", 00:17:45.691 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:45.691 "is_configured": true, 00:17:45.691 "data_offset": 0, 00:17:45.691 "data_size": 65536 00:17:45.691 } 00:17:45.691 ] 00:17:45.691 }' 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.691 16:25:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.257 "name": "raid_bdev1", 00:17:46.257 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:46.257 "strip_size_kb": 64, 00:17:46.257 "state": "online", 00:17:46.257 "raid_level": "raid5f", 00:17:46.257 "superblock": false, 00:17:46.257 "num_base_bdevs": 3, 00:17:46.257 "num_base_bdevs_discovered": 2, 00:17:46.257 "num_base_bdevs_operational": 2, 00:17:46.257 "base_bdevs_list": [ 00:17:46.257 { 00:17:46.257 "name": null, 00:17:46.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.257 "is_configured": false, 00:17:46.257 "data_offset": 0, 00:17:46.257 "data_size": 65536 00:17:46.257 }, 00:17:46.257 { 00:17:46.257 "name": "BaseBdev2", 00:17:46.257 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:46.257 "is_configured": true, 00:17:46.257 "data_offset": 0, 00:17:46.257 "data_size": 65536 00:17:46.257 }, 00:17:46.257 { 00:17:46.257 "name": "BaseBdev3", 
00:17:46.257 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:46.257 "is_configured": true, 00:17:46.257 "data_offset": 0, 00:17:46.257 "data_size": 65536 00:17:46.257 } 00:17:46.257 ] 00:17:46.257 }' 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.257 [2024-10-08 16:25:39.500258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.257 [2024-10-08 16:25:39.513663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.257 16:25:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:46.257 [2024-10-08 16:25:39.520879] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.633 "name": "raid_bdev1", 00:17:47.633 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:47.633 "strip_size_kb": 64, 00:17:47.633 "state": "online", 00:17:47.633 "raid_level": "raid5f", 00:17:47.633 "superblock": false, 00:17:47.633 "num_base_bdevs": 3, 00:17:47.633 "num_base_bdevs_discovered": 3, 00:17:47.633 "num_base_bdevs_operational": 3, 00:17:47.633 "process": { 00:17:47.633 "type": "rebuild", 00:17:47.633 "target": "spare", 00:17:47.633 "progress": { 00:17:47.633 "blocks": 18432, 00:17:47.633 "percent": 14 00:17:47.633 } 00:17:47.633 }, 00:17:47.633 "base_bdevs_list": [ 00:17:47.633 { 00:17:47.633 "name": "spare", 00:17:47.633 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:47.633 "is_configured": true, 00:17:47.633 "data_offset": 0, 00:17:47.633 "data_size": 65536 00:17:47.633 }, 00:17:47.633 { 00:17:47.633 "name": "BaseBdev2", 00:17:47.633 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:47.633 "is_configured": true, 00:17:47.633 "data_offset": 0, 00:17:47.633 "data_size": 65536 00:17:47.633 }, 00:17:47.633 { 00:17:47.633 "name": "BaseBdev3", 00:17:47.633 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:47.633 "is_configured": true, 00:17:47.633 "data_offset": 0, 00:17:47.633 
"data_size": 65536 00:17:47.633 } 00:17:47.633 ] 00:17:47.633 }' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.633 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.634 16:25:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.634 "name": "raid_bdev1", 00:17:47.634 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:47.634 "strip_size_kb": 64, 00:17:47.634 "state": "online", 00:17:47.634 "raid_level": "raid5f", 00:17:47.634 "superblock": false, 00:17:47.634 "num_base_bdevs": 3, 00:17:47.634 "num_base_bdevs_discovered": 3, 00:17:47.634 "num_base_bdevs_operational": 3, 00:17:47.634 "process": { 00:17:47.634 "type": "rebuild", 00:17:47.634 "target": "spare", 00:17:47.634 "progress": { 00:17:47.634 "blocks": 22528, 00:17:47.634 "percent": 17 00:17:47.634 } 00:17:47.634 }, 00:17:47.634 "base_bdevs_list": [ 00:17:47.634 { 00:17:47.634 "name": "spare", 00:17:47.634 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:47.634 "is_configured": true, 00:17:47.634 "data_offset": 0, 00:17:47.634 "data_size": 65536 00:17:47.634 }, 00:17:47.634 { 00:17:47.634 "name": "BaseBdev2", 00:17:47.634 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:47.634 "is_configured": true, 00:17:47.634 "data_offset": 0, 00:17:47.634 "data_size": 65536 00:17:47.634 }, 00:17:47.634 { 00:17:47.634 "name": "BaseBdev3", 00:17:47.634 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:47.634 "is_configured": true, 00:17:47.634 "data_offset": 0, 00:17:47.634 "data_size": 65536 00:17:47.634 } 00:17:47.634 ] 00:17:47.634 }' 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- 
# [[ spare == \s\p\a\r\e ]] 00:17:47.634 16:25:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.571 16:25:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.835 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.835 "name": "raid_bdev1", 00:17:48.835 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:48.835 "strip_size_kb": 64, 00:17:48.835 "state": "online", 00:17:48.835 "raid_level": "raid5f", 00:17:48.835 "superblock": false, 00:17:48.835 "num_base_bdevs": 3, 00:17:48.835 "num_base_bdevs_discovered": 3, 00:17:48.835 "num_base_bdevs_operational": 3, 00:17:48.835 "process": { 00:17:48.835 "type": "rebuild", 00:17:48.835 "target": "spare", 00:17:48.835 "progress": { 00:17:48.835 "blocks": 47104, 00:17:48.835 "percent": 35 00:17:48.835 } 00:17:48.835 }, 
00:17:48.835 "base_bdevs_list": [ 00:17:48.835 { 00:17:48.835 "name": "spare", 00:17:48.835 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:48.835 "is_configured": true, 00:17:48.835 "data_offset": 0, 00:17:48.835 "data_size": 65536 00:17:48.835 }, 00:17:48.835 { 00:17:48.835 "name": "BaseBdev2", 00:17:48.835 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:48.835 "is_configured": true, 00:17:48.835 "data_offset": 0, 00:17:48.835 "data_size": 65536 00:17:48.835 }, 00:17:48.835 { 00:17:48.835 "name": "BaseBdev3", 00:17:48.835 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:48.835 "is_configured": true, 00:17:48.835 "data_offset": 0, 00:17:48.835 "data_size": 65536 00:17:48.835 } 00:17:48.835 ] 00:17:48.835 }' 00:17:48.835 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.835 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.835 16:25:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.835 16:25:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.835 16:25:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.770 "name": "raid_bdev1", 00:17:49.770 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:49.770 "strip_size_kb": 64, 00:17:49.770 "state": "online", 00:17:49.770 "raid_level": "raid5f", 00:17:49.770 "superblock": false, 00:17:49.770 "num_base_bdevs": 3, 00:17:49.770 "num_base_bdevs_discovered": 3, 00:17:49.770 "num_base_bdevs_operational": 3, 00:17:49.770 "process": { 00:17:49.770 "type": "rebuild", 00:17:49.770 "target": "spare", 00:17:49.770 "progress": { 00:17:49.770 "blocks": 69632, 00:17:49.770 "percent": 53 00:17:49.770 } 00:17:49.770 }, 00:17:49.770 "base_bdevs_list": [ 00:17:49.770 { 00:17:49.770 "name": "spare", 00:17:49.770 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:49.770 "is_configured": true, 00:17:49.770 "data_offset": 0, 00:17:49.770 "data_size": 65536 00:17:49.770 }, 00:17:49.770 { 00:17:49.770 "name": "BaseBdev2", 00:17:49.770 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:49.770 "is_configured": true, 00:17:49.770 "data_offset": 0, 00:17:49.770 "data_size": 65536 00:17:49.770 }, 00:17:49.770 { 00:17:49.770 "name": "BaseBdev3", 00:17:49.770 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:49.770 "is_configured": true, 00:17:49.770 "data_offset": 0, 00:17:49.770 "data_size": 65536 00:17:49.770 } 00:17:49.770 ] 00:17:49.770 }' 00:17:49.770 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:17:50.029 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.029 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.029 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.029 16:25:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.963 "name": "raid_bdev1", 00:17:50.963 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:50.963 "strip_size_kb": 64, 00:17:50.963 "state": "online", 00:17:50.963 "raid_level": "raid5f", 00:17:50.963 "superblock": false, 00:17:50.963 
"num_base_bdevs": 3, 00:17:50.963 "num_base_bdevs_discovered": 3, 00:17:50.963 "num_base_bdevs_operational": 3, 00:17:50.963 "process": { 00:17:50.963 "type": "rebuild", 00:17:50.963 "target": "spare", 00:17:50.963 "progress": { 00:17:50.963 "blocks": 94208, 00:17:50.963 "percent": 71 00:17:50.963 } 00:17:50.963 }, 00:17:50.963 "base_bdevs_list": [ 00:17:50.963 { 00:17:50.963 "name": "spare", 00:17:50.963 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:50.963 "is_configured": true, 00:17:50.963 "data_offset": 0, 00:17:50.963 "data_size": 65536 00:17:50.963 }, 00:17:50.963 { 00:17:50.963 "name": "BaseBdev2", 00:17:50.963 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:50.963 "is_configured": true, 00:17:50.963 "data_offset": 0, 00:17:50.963 "data_size": 65536 00:17:50.963 }, 00:17:50.963 { 00:17:50.963 "name": "BaseBdev3", 00:17:50.963 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:50.963 "is_configured": true, 00:17:50.963 "data_offset": 0, 00:17:50.963 "data_size": 65536 00:17:50.963 } 00:17:50.963 ] 00:17:50.963 }' 00:17:50.963 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.221 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.221 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.221 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.221 16:25:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.164 "name": "raid_bdev1", 00:17:52.164 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:52.164 "strip_size_kb": 64, 00:17:52.164 "state": "online", 00:17:52.164 "raid_level": "raid5f", 00:17:52.164 "superblock": false, 00:17:52.164 "num_base_bdevs": 3, 00:17:52.164 "num_base_bdevs_discovered": 3, 00:17:52.164 "num_base_bdevs_operational": 3, 00:17:52.164 "process": { 00:17:52.164 "type": "rebuild", 00:17:52.164 "target": "spare", 00:17:52.164 "progress": { 00:17:52.164 "blocks": 116736, 00:17:52.164 "percent": 89 00:17:52.164 } 00:17:52.164 }, 00:17:52.164 "base_bdevs_list": [ 00:17:52.164 { 00:17:52.164 "name": "spare", 00:17:52.164 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:52.164 "is_configured": true, 00:17:52.164 "data_offset": 0, 00:17:52.164 "data_size": 65536 00:17:52.164 }, 00:17:52.164 { 00:17:52.164 "name": "BaseBdev2", 00:17:52.164 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:52.164 "is_configured": true, 00:17:52.164 "data_offset": 0, 00:17:52.164 "data_size": 65536 00:17:52.164 }, 00:17:52.164 { 00:17:52.164 "name": "BaseBdev3", 
00:17:52.164 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:52.164 "is_configured": true, 00:17:52.164 "data_offset": 0, 00:17:52.164 "data_size": 65536 00:17:52.164 } 00:17:52.164 ] 00:17:52.164 }' 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.164 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.429 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.429 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.429 16:25:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.687 [2024-10-08 16:25:46.001575] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:52.687 [2024-10-08 16:25:46.001695] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:52.687 [2024-10-08 16:25:46.001761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.251 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.509 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.509 "name": "raid_bdev1", 00:17:53.509 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:53.509 "strip_size_kb": 64, 00:17:53.509 "state": "online", 00:17:53.509 "raid_level": "raid5f", 00:17:53.509 "superblock": false, 00:17:53.509 "num_base_bdevs": 3, 00:17:53.510 "num_base_bdevs_discovered": 3, 00:17:53.510 "num_base_bdevs_operational": 3, 00:17:53.510 "base_bdevs_list": [ 00:17:53.510 { 00:17:53.510 "name": "spare", 00:17:53.510 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 }, 00:17:53.510 { 00:17:53.510 "name": "BaseBdev2", 00:17:53.510 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 }, 00:17:53.510 { 00:17:53.510 "name": "BaseBdev3", 00:17:53.510 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 } 00:17:53.510 ] 00:17:53.510 }' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:53.510 
16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.510 "name": "raid_bdev1", 00:17:53.510 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:53.510 "strip_size_kb": 64, 00:17:53.510 "state": "online", 00:17:53.510 "raid_level": "raid5f", 00:17:53.510 "superblock": false, 00:17:53.510 "num_base_bdevs": 3, 00:17:53.510 "num_base_bdevs_discovered": 3, 00:17:53.510 "num_base_bdevs_operational": 3, 00:17:53.510 "base_bdevs_list": [ 00:17:53.510 { 00:17:53.510 "name": "spare", 00:17:53.510 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 }, 00:17:53.510 { 00:17:53.510 "name": "BaseBdev2", 00:17:53.510 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 
00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 }, 00:17:53.510 { 00:17:53.510 "name": "BaseBdev3", 00:17:53.510 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:53.510 "is_configured": true, 00:17:53.510 "data_offset": 0, 00:17:53.510 "data_size": 65536 00:17:53.510 } 00:17:53.510 ] 00:17:53.510 }' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.510 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.769 "name": "raid_bdev1", 00:17:53.769 "uuid": "e85cfabd-bfa4-465a-999c-6b56f5d46836", 00:17:53.769 "strip_size_kb": 64, 00:17:53.769 "state": "online", 00:17:53.769 "raid_level": "raid5f", 00:17:53.769 "superblock": false, 00:17:53.769 "num_base_bdevs": 3, 00:17:53.769 "num_base_bdevs_discovered": 3, 00:17:53.769 "num_base_bdevs_operational": 3, 00:17:53.769 "base_bdevs_list": [ 00:17:53.769 { 00:17:53.769 "name": "spare", 00:17:53.769 "uuid": "1a9b3464-21d3-54b6-b641-490ba5a9d03a", 00:17:53.769 "is_configured": true, 00:17:53.769 "data_offset": 0, 00:17:53.769 "data_size": 65536 00:17:53.769 }, 00:17:53.769 { 00:17:53.769 "name": "BaseBdev2", 00:17:53.769 "uuid": "832021c6-365b-5fb2-abd9-9268597c9ee6", 00:17:53.769 "is_configured": true, 00:17:53.769 "data_offset": 0, 00:17:53.769 "data_size": 65536 00:17:53.769 }, 00:17:53.769 { 00:17:53.769 "name": "BaseBdev3", 00:17:53.769 "uuid": "94ee9cf3-7f34-579d-ad0f-e4589a40f878", 00:17:53.769 "is_configured": true, 00:17:53.769 "data_offset": 0, 00:17:53.769 "data_size": 65536 00:17:53.769 } 00:17:53.769 ] 00:17:53.769 }' 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.769 16:25:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.335 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:54.335 16:25:47 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.335 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.335 [2024-10-08 16:25:47.455372] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.335 [2024-10-08 16:25:47.455426] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.335 [2024-10-08 16:25:47.455555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.335 [2024-10-08 16:25:47.455663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.335 [2024-10-08 16:25:47.455688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk.sock 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.336 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:54.611 /dev/nbd0 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:54.611 1+0 records in 00:17:54.611 1+0 records out 00:17:54.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338201 s, 12.1 MB/s 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.611 16:25:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:54.889 /dev/nbd1 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:55.147 1+0 records in 00:17:55.147 1+0 records out 00:17:55.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297635 s, 13.8 MB/s 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:55.147 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:55.148 16:25:48 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.148 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.714 16:25:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:55.972 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( 
i <= 20 )) 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82307 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 82307 ']' 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 82307 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82307 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.973 killing process with pid 82307 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82307' 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 82307 00:17:55.973 Received shutdown signal, test time was about 60.000000 seconds 00:17:55.973 00:17:55.973 Latency(us) 00:17:55.973 [2024-10-08T16:25:49.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.973 [2024-10-08T16:25:49.295Z] =================================================================================================================== 00:17:55.973 [2024-10-08T16:25:49.295Z] Total : 0.00 0.00 0.00 0.00 
0.00 18446744073709551616.00 0.00 00:17:55.973 [2024-10-08 16:25:49.149741] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.973 16:25:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 82307 00:17:56.231 [2024-10-08 16:25:49.501399] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:57.605 00:17:57.605 real 0m16.796s 00:17:57.605 user 0m21.391s 00:17:57.605 sys 0m2.168s 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.605 ************************************ 00:17:57.605 END TEST raid5f_rebuild_test 00:17:57.605 ************************************ 00:17:57.605 16:25:50 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:57.605 16:25:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:57.605 16:25:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.605 16:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.605 ************************************ 00:17:57.605 START TEST raid5f_rebuild_test_sb 00:17:57.605 ************************************ 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:57.605 
16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:57.605 
16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82765 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82765 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82765 ']' 00:17:57.605 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.606 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.606 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.606 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.606 16:25:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.606 [2024-10-08 16:25:50.893240] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:17:57.606 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:57.606 Zero copy mechanism will not be used. 00:17:57.606 [2024-10-08 16:25:50.893952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82765 ] 00:17:57.864 [2024-10-08 16:25:51.059645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.122 [2024-10-08 16:25:51.361003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.379 [2024-10-08 16:25:51.604423] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.379 [2024-10-08 16:25:51.604468] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.638 BaseBdev1_malloc 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.638 [2024-10-08 16:25:51.939047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:58.638 [2024-10-08 16:25:51.939139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.638 [2024-10-08 16:25:51.939196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:58.638 [2024-10-08 16:25:51.939235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.638 [2024-10-08 16:25:51.942149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.638 [2024-10-08 16:25:51.942193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:58.638 BaseBdev1 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.638 16:25:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 BaseBdev2_malloc 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 [2024-10-08 16:25:52.009643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:58.898 
[2024-10-08 16:25:52.009728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.898 [2024-10-08 16:25:52.009758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.898 [2024-10-08 16:25:52.009780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.898 [2024-10-08 16:25:52.012628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.898 [2024-10-08 16:25:52.012675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:58.898 BaseBdev2 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 BaseBdev3_malloc 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 [2024-10-08 16:25:52.066614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:58.898 [2024-10-08 16:25:52.066687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.898 [2024-10-08 16:25:52.066718] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.898 [2024-10-08 16:25:52.066737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.898 [2024-10-08 16:25:52.069572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.898 [2024-10-08 16:25:52.069652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:58.898 BaseBdev3 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 spare_malloc 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 spare_delay 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 [2024-10-08 16:25:52.131453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:17:58.898 [2024-10-08 16:25:52.131535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.898 [2024-10-08 16:25:52.131581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:58.898 [2024-10-08 16:25:52.131599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.898 [2024-10-08 16:25:52.134451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.898 [2024-10-08 16:25:52.134502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.898 spare 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 [2024-10-08 16:25:52.143553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.898 [2024-10-08 16:25:52.145948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.898 [2024-10-08 16:25:52.146051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.898 [2024-10-08 16:25:52.146298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:58.898 [2024-10-08 16:25:52.146327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:58.898 [2024-10-08 16:25:52.146664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:58.898 [2024-10-08 16:25:52.151817] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:17:58.898 [2024-10-08 16:25:52.151853] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:58.898 [2024-10-08 16:25:52.152096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 
16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.898 "name": "raid_bdev1", 00:17:58.898 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:17:58.898 "strip_size_kb": 64, 00:17:58.898 "state": "online", 00:17:58.898 "raid_level": "raid5f", 00:17:58.898 "superblock": true, 00:17:58.898 "num_base_bdevs": 3, 00:17:58.898 "num_base_bdevs_discovered": 3, 00:17:58.898 "num_base_bdevs_operational": 3, 00:17:58.898 "base_bdevs_list": [ 00:17:58.898 { 00:17:58.898 "name": "BaseBdev1", 00:17:58.898 "uuid": "06706872-1548-511f-815d-dbd675827216", 00:17:58.898 "is_configured": true, 00:17:58.898 "data_offset": 2048, 00:17:58.898 "data_size": 63488 00:17:58.898 }, 00:17:58.898 { 00:17:58.898 "name": "BaseBdev2", 00:17:58.898 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:17:58.898 "is_configured": true, 00:17:58.898 "data_offset": 2048, 00:17:58.898 "data_size": 63488 00:17:58.898 }, 00:17:58.898 { 00:17:58.898 "name": "BaseBdev3", 00:17:58.898 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:17:58.898 "is_configured": true, 00:17:58.898 "data_offset": 2048, 00:17:58.898 "data_size": 63488 00:17:58.898 } 00:17:58.898 ] 00:17:58.898 }' 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.898 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:59.464 [2024-10-08 16:25:52.690160] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.464 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.722 16:25:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:59.722 16:25:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:59.981 [2024-10-08 16:25:53.058121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:59.981 /dev/nbd0 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.981 1+0 records in 00:17:59.981 1+0 records out 00:17:59.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354096 s, 11.6 MB/s 
00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:59.981 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:00.617 496+0 records in 00:18:00.617 496+0 records out 00:18:00.617 65011712 bytes (65 MB, 62 MiB) copied, 0.473212 s, 137 MB/s 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:00.617 16:25:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.617 [2024-10-08 16:25:53.900227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.617 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.875 [2024-10-08 16:25:53.942100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.875 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.876 16:25:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.876 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.876 "name": "raid_bdev1", 00:18:00.876 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:00.876 "strip_size_kb": 64, 00:18:00.876 "state": "online", 00:18:00.876 "raid_level": "raid5f", 00:18:00.876 "superblock": true, 00:18:00.876 "num_base_bdevs": 3, 00:18:00.876 "num_base_bdevs_discovered": 2, 00:18:00.876 "num_base_bdevs_operational": 2, 00:18:00.876 "base_bdevs_list": [ 00:18:00.876 { 00:18:00.876 "name": null, 00:18:00.876 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:00.876 "is_configured": false, 00:18:00.876 "data_offset": 0, 00:18:00.876 "data_size": 63488 00:18:00.876 }, 00:18:00.876 { 00:18:00.876 "name": "BaseBdev2", 00:18:00.876 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:00.876 "is_configured": true, 00:18:00.876 "data_offset": 2048, 00:18:00.876 "data_size": 63488 00:18:00.876 }, 00:18:00.876 { 00:18:00.876 "name": "BaseBdev3", 00:18:00.876 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:00.876 "is_configured": true, 00:18:00.876 "data_offset": 2048, 00:18:00.876 "data_size": 63488 00:18:00.876 } 00:18:00.876 ] 00:18:00.876 }' 00:18:00.876 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.876 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.442 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.442 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 [2024-10-08 16:25:54.466246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.442 [2024-10-08 16:25:54.480608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:01.442 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.442 16:25:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:01.442 [2024-10-08 16:25:54.487947] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.375 "name": "raid_bdev1", 00:18:02.375 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:02.375 "strip_size_kb": 64, 00:18:02.375 "state": "online", 00:18:02.375 "raid_level": "raid5f", 00:18:02.375 "superblock": true, 00:18:02.375 "num_base_bdevs": 3, 00:18:02.375 "num_base_bdevs_discovered": 3, 00:18:02.375 "num_base_bdevs_operational": 3, 00:18:02.375 "process": { 00:18:02.375 "type": "rebuild", 00:18:02.375 "target": "spare", 00:18:02.375 "progress": { 00:18:02.375 "blocks": 20480, 00:18:02.375 "percent": 16 00:18:02.375 } 00:18:02.375 }, 00:18:02.375 "base_bdevs_list": [ 00:18:02.375 { 00:18:02.375 "name": "spare", 00:18:02.375 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:02.375 "is_configured": true, 00:18:02.375 "data_offset": 2048, 00:18:02.375 "data_size": 63488 00:18:02.375 }, 00:18:02.375 { 00:18:02.375 "name": "BaseBdev2", 00:18:02.375 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:02.375 "is_configured": true, 00:18:02.375 
"data_offset": 2048, 00:18:02.375 "data_size": 63488 00:18:02.375 }, 00:18:02.375 { 00:18:02.375 "name": "BaseBdev3", 00:18:02.375 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:02.375 "is_configured": true, 00:18:02.375 "data_offset": 2048, 00:18:02.375 "data_size": 63488 00:18:02.375 } 00:18:02.375 ] 00:18:02.375 }' 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.375 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.375 [2024-10-08 16:25:55.653665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.633 [2024-10-08 16:25:55.703345] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.633 [2024-10-08 16:25:55.703450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.633 [2024-10-08 16:25:55.703480] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.633 [2024-10-08 16:25:55.703493] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.633 "name": "raid_bdev1", 00:18:02.633 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:02.633 "strip_size_kb": 64, 00:18:02.633 "state": "online", 00:18:02.633 "raid_level": "raid5f", 00:18:02.633 "superblock": true, 00:18:02.633 "num_base_bdevs": 3, 00:18:02.633 "num_base_bdevs_discovered": 2, 00:18:02.633 
"num_base_bdevs_operational": 2, 00:18:02.633 "base_bdevs_list": [ 00:18:02.633 { 00:18:02.633 "name": null, 00:18:02.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.633 "is_configured": false, 00:18:02.633 "data_offset": 0, 00:18:02.633 "data_size": 63488 00:18:02.633 }, 00:18:02.633 { 00:18:02.633 "name": "BaseBdev2", 00:18:02.633 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:02.633 "is_configured": true, 00:18:02.633 "data_offset": 2048, 00:18:02.633 "data_size": 63488 00:18:02.633 }, 00:18:02.633 { 00:18:02.633 "name": "BaseBdev3", 00:18:02.633 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:02.633 "is_configured": true, 00:18:02.633 "data_offset": 2048, 00:18:02.633 "data_size": 63488 00:18:02.633 } 00:18:02.633 ] 00:18:02.633 }' 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.633 16:25:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.200 "name": "raid_bdev1", 00:18:03.200 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:03.200 "strip_size_kb": 64, 00:18:03.200 "state": "online", 00:18:03.200 "raid_level": "raid5f", 00:18:03.200 "superblock": true, 00:18:03.200 "num_base_bdevs": 3, 00:18:03.200 "num_base_bdevs_discovered": 2, 00:18:03.200 "num_base_bdevs_operational": 2, 00:18:03.200 "base_bdevs_list": [ 00:18:03.200 { 00:18:03.200 "name": null, 00:18:03.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.200 "is_configured": false, 00:18:03.200 "data_offset": 0, 00:18:03.200 "data_size": 63488 00:18:03.200 }, 00:18:03.200 { 00:18:03.200 "name": "BaseBdev2", 00:18:03.200 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:03.200 "is_configured": true, 00:18:03.200 "data_offset": 2048, 00:18:03.200 "data_size": 63488 00:18:03.200 }, 00:18:03.200 { 00:18:03.200 "name": "BaseBdev3", 00:18:03.200 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:03.200 "is_configured": true, 00:18:03.200 "data_offset": 2048, 00:18:03.200 "data_size": 63488 00:18:03.200 } 00:18:03.200 ] 00:18:03.200 }' 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.200 [2024-10-08 16:25:56.397164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.200 [2024-10-08 16:25:56.410782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.200 16:25:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:03.200 [2024-10-08 16:25:56.418135] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.394 "name": "raid_bdev1", 00:18:04.394 "uuid": 
"317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:04.394 "strip_size_kb": 64, 00:18:04.394 "state": "online", 00:18:04.394 "raid_level": "raid5f", 00:18:04.394 "superblock": true, 00:18:04.394 "num_base_bdevs": 3, 00:18:04.394 "num_base_bdevs_discovered": 3, 00:18:04.394 "num_base_bdevs_operational": 3, 00:18:04.394 "process": { 00:18:04.394 "type": "rebuild", 00:18:04.394 "target": "spare", 00:18:04.394 "progress": { 00:18:04.394 "blocks": 18432, 00:18:04.394 "percent": 14 00:18:04.394 } 00:18:04.394 }, 00:18:04.394 "base_bdevs_list": [ 00:18:04.394 { 00:18:04.394 "name": "spare", 00:18:04.394 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 }, 00:18:04.394 { 00:18:04.394 "name": "BaseBdev2", 00:18:04.394 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 }, 00:18:04.394 { 00:18:04.394 "name": "BaseBdev3", 00:18:04.394 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 } 00:18:04.394 ] 00:18:04.394 }' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:04.394 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: 
=: unary operator expected 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=627 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.394 "name": "raid_bdev1", 00:18:04.394 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:04.394 "strip_size_kb": 64, 00:18:04.394 "state": "online", 00:18:04.394 "raid_level": "raid5f", 00:18:04.394 "superblock": true, 00:18:04.394 "num_base_bdevs": 3, 00:18:04.394 
"num_base_bdevs_discovered": 3, 00:18:04.394 "num_base_bdevs_operational": 3, 00:18:04.394 "process": { 00:18:04.394 "type": "rebuild", 00:18:04.394 "target": "spare", 00:18:04.394 "progress": { 00:18:04.394 "blocks": 22528, 00:18:04.394 "percent": 17 00:18:04.394 } 00:18:04.394 }, 00:18:04.394 "base_bdevs_list": [ 00:18:04.394 { 00:18:04.394 "name": "spare", 00:18:04.394 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 }, 00:18:04.394 { 00:18:04.394 "name": "BaseBdev2", 00:18:04.394 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 }, 00:18:04.394 { 00:18:04.394 "name": "BaseBdev3", 00:18:04.394 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:04.394 "is_configured": true, 00:18:04.394 "data_offset": 2048, 00:18:04.394 "data_size": 63488 00:18:04.394 } 00:18:04.394 ] 00:18:04.394 }' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.394 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.652 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.652 16:25:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.587 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.588 "name": "raid_bdev1", 00:18:05.588 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:05.588 "strip_size_kb": 64, 00:18:05.588 "state": "online", 00:18:05.588 "raid_level": "raid5f", 00:18:05.588 "superblock": true, 00:18:05.588 "num_base_bdevs": 3, 00:18:05.588 "num_base_bdevs_discovered": 3, 00:18:05.588 "num_base_bdevs_operational": 3, 00:18:05.588 "process": { 00:18:05.588 "type": "rebuild", 00:18:05.588 "target": "spare", 00:18:05.588 "progress": { 00:18:05.588 "blocks": 47104, 00:18:05.588 "percent": 37 00:18:05.588 } 00:18:05.588 }, 00:18:05.588 "base_bdevs_list": [ 00:18:05.588 { 00:18:05.588 "name": "spare", 00:18:05.588 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:05.588 "is_configured": true, 00:18:05.588 "data_offset": 2048, 00:18:05.588 "data_size": 63488 00:18:05.588 }, 00:18:05.588 { 00:18:05.588 "name": "BaseBdev2", 00:18:05.588 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:05.588 "is_configured": true, 00:18:05.588 "data_offset": 2048, 00:18:05.588 "data_size": 63488 00:18:05.588 }, 00:18:05.588 { 
00:18:05.588 "name": "BaseBdev3", 00:18:05.588 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:05.588 "is_configured": true, 00:18:05.588 "data_offset": 2048, 00:18:05.588 "data_size": 63488 00:18:05.588 } 00:18:05.588 ] 00:18:05.588 }' 00:18:05.588 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.588 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.588 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.845 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.845 16:25:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.780 "name": "raid_bdev1", 00:18:06.780 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:06.780 "strip_size_kb": 64, 00:18:06.780 "state": "online", 00:18:06.780 "raid_level": "raid5f", 00:18:06.780 "superblock": true, 00:18:06.780 "num_base_bdevs": 3, 00:18:06.780 "num_base_bdevs_discovered": 3, 00:18:06.780 "num_base_bdevs_operational": 3, 00:18:06.780 "process": { 00:18:06.780 "type": "rebuild", 00:18:06.780 "target": "spare", 00:18:06.780 "progress": { 00:18:06.780 "blocks": 69632, 00:18:06.780 "percent": 54 00:18:06.780 } 00:18:06.780 }, 00:18:06.780 "base_bdevs_list": [ 00:18:06.780 { 00:18:06.780 "name": "spare", 00:18:06.780 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:06.780 "is_configured": true, 00:18:06.780 "data_offset": 2048, 00:18:06.780 "data_size": 63488 00:18:06.780 }, 00:18:06.780 { 00:18:06.780 "name": "BaseBdev2", 00:18:06.780 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:06.780 "is_configured": true, 00:18:06.780 "data_offset": 2048, 00:18:06.780 "data_size": 63488 00:18:06.780 }, 00:18:06.780 { 00:18:06.780 "name": "BaseBdev3", 00:18:06.780 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:06.780 "is_configured": true, 00:18:06.780 "data_offset": 2048, 00:18:06.780 "data_size": 63488 00:18:06.780 } 00:18:06.780 ] 00:18:06.780 }' 00:18:06.780 16:25:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.780 16:26:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.780 16:26:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.781 16:26:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.781 16:26:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.156 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.156 "name": "raid_bdev1", 00:18:08.156 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:08.156 "strip_size_kb": 64, 00:18:08.156 "state": "online", 00:18:08.156 "raid_level": "raid5f", 00:18:08.156 "superblock": true, 00:18:08.156 "num_base_bdevs": 3, 00:18:08.156 "num_base_bdevs_discovered": 3, 00:18:08.156 "num_base_bdevs_operational": 3, 00:18:08.156 "process": { 00:18:08.156 "type": "rebuild", 00:18:08.156 "target": "spare", 00:18:08.156 "progress": { 00:18:08.156 "blocks": 94208, 00:18:08.156 "percent": 74 00:18:08.156 } 00:18:08.156 }, 00:18:08.156 "base_bdevs_list": [ 00:18:08.156 { 
00:18:08.156 "name": "spare", 00:18:08.156 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:08.156 "is_configured": true, 00:18:08.156 "data_offset": 2048, 00:18:08.156 "data_size": 63488 00:18:08.156 }, 00:18:08.156 { 00:18:08.156 "name": "BaseBdev2", 00:18:08.156 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:08.156 "is_configured": true, 00:18:08.156 "data_offset": 2048, 00:18:08.156 "data_size": 63488 00:18:08.156 }, 00:18:08.156 { 00:18:08.156 "name": "BaseBdev3", 00:18:08.156 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:08.156 "is_configured": true, 00:18:08.156 "data_offset": 2048, 00:18:08.156 "data_size": 63488 00:18:08.156 } 00:18:08.157 ] 00:18:08.157 }' 00:18:08.157 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.157 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.157 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.157 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.157 16:26:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.093 "name": "raid_bdev1", 00:18:09.093 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:09.093 "strip_size_kb": 64, 00:18:09.093 "state": "online", 00:18:09.093 "raid_level": "raid5f", 00:18:09.093 "superblock": true, 00:18:09.093 "num_base_bdevs": 3, 00:18:09.093 "num_base_bdevs_discovered": 3, 00:18:09.093 "num_base_bdevs_operational": 3, 00:18:09.093 "process": { 00:18:09.093 "type": "rebuild", 00:18:09.093 "target": "spare", 00:18:09.093 "progress": { 00:18:09.093 "blocks": 116736, 00:18:09.093 "percent": 91 00:18:09.093 } 00:18:09.093 }, 00:18:09.093 "base_bdevs_list": [ 00:18:09.093 { 00:18:09.093 "name": "spare", 00:18:09.093 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:09.093 "is_configured": true, 00:18:09.093 "data_offset": 2048, 00:18:09.093 "data_size": 63488 00:18:09.093 }, 00:18:09.093 { 00:18:09.093 "name": "BaseBdev2", 00:18:09.093 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:09.093 "is_configured": true, 00:18:09.093 "data_offset": 2048, 00:18:09.093 "data_size": 63488 00:18:09.093 }, 00:18:09.093 { 00:18:09.093 "name": "BaseBdev3", 00:18:09.093 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:09.093 "is_configured": true, 00:18:09.093 "data_offset": 2048, 00:18:09.093 "data_size": 63488 00:18:09.093 } 00:18:09.093 ] 00:18:09.093 }' 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.093 16:26:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.666 [2024-10-08 16:26:02.694884] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:09.666 [2024-10-08 16:26:02.695010] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:09.666 [2024-10-08 16:26:02.695200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.101 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.360 
16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.360 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.360 "name": "raid_bdev1", 00:18:10.360 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:10.360 "strip_size_kb": 64, 00:18:10.360 "state": "online", 00:18:10.360 "raid_level": "raid5f", 00:18:10.360 "superblock": true, 00:18:10.360 "num_base_bdevs": 3, 00:18:10.360 "num_base_bdevs_discovered": 3, 00:18:10.360 "num_base_bdevs_operational": 3, 00:18:10.360 "base_bdevs_list": [ 00:18:10.360 { 00:18:10.360 "name": "spare", 00:18:10.360 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:10.360 "is_configured": true, 00:18:10.360 "data_offset": 2048, 00:18:10.360 "data_size": 63488 00:18:10.360 }, 00:18:10.360 { 00:18:10.360 "name": "BaseBdev2", 00:18:10.360 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:10.360 "is_configured": true, 00:18:10.360 "data_offset": 2048, 00:18:10.360 "data_size": 63488 00:18:10.360 }, 00:18:10.360 { 00:18:10.360 "name": "BaseBdev3", 00:18:10.360 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:10.360 "is_configured": true, 00:18:10.360 "data_offset": 2048, 00:18:10.360 "data_size": 63488 00:18:10.360 } 00:18:10.360 ] 00:18:10.360 }' 00:18:10.360 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.360 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:10.360 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.360 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.361 
16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.361 "name": "raid_bdev1", 00:18:10.361 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:10.361 "strip_size_kb": 64, 00:18:10.361 "state": "online", 00:18:10.361 "raid_level": "raid5f", 00:18:10.361 "superblock": true, 00:18:10.361 "num_base_bdevs": 3, 00:18:10.361 "num_base_bdevs_discovered": 3, 00:18:10.361 "num_base_bdevs_operational": 3, 00:18:10.361 "base_bdevs_list": [ 00:18:10.361 { 00:18:10.361 "name": "spare", 00:18:10.361 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:10.361 "is_configured": true, 00:18:10.361 "data_offset": 2048, 00:18:10.361 "data_size": 63488 00:18:10.361 }, 00:18:10.361 { 00:18:10.361 "name": "BaseBdev2", 00:18:10.361 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:10.361 "is_configured": true, 00:18:10.361 "data_offset": 2048, 00:18:10.361 "data_size": 63488 00:18:10.361 }, 00:18:10.361 { 00:18:10.361 "name": "BaseBdev3", 00:18:10.361 "uuid": 
"9310c948-a739-5018-ae1c-34995377072d", 00:18:10.361 "is_configured": true, 00:18:10.361 "data_offset": 2048, 00:18:10.361 "data_size": 63488 00:18:10.361 } 00:18:10.361 ] 00:18:10.361 }' 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.361 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.620 "name": "raid_bdev1", 00:18:10.620 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:10.620 "strip_size_kb": 64, 00:18:10.620 "state": "online", 00:18:10.620 "raid_level": "raid5f", 00:18:10.620 "superblock": true, 00:18:10.620 "num_base_bdevs": 3, 00:18:10.620 "num_base_bdevs_discovered": 3, 00:18:10.620 "num_base_bdevs_operational": 3, 00:18:10.620 "base_bdevs_list": [ 00:18:10.620 { 00:18:10.620 "name": "spare", 00:18:10.620 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:10.620 "is_configured": true, 00:18:10.620 "data_offset": 2048, 00:18:10.620 "data_size": 63488 00:18:10.620 }, 00:18:10.620 { 00:18:10.620 "name": "BaseBdev2", 00:18:10.620 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:10.620 "is_configured": true, 00:18:10.620 "data_offset": 2048, 00:18:10.620 "data_size": 63488 00:18:10.620 }, 00:18:10.620 { 00:18:10.620 "name": "BaseBdev3", 00:18:10.620 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:10.620 "is_configured": true, 00:18:10.620 "data_offset": 2048, 00:18:10.620 "data_size": 63488 00:18:10.620 } 00:18:10.620 ] 00:18:10.620 }' 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.620 16:26:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.187 16:26:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.187 [2024-10-08 16:26:04.233575] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.187 [2024-10-08 16:26:04.233632] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.187 [2024-10-08 16:26:04.233752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.187 [2024-10-08 16:26:04.233854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.187 [2024-10-08 16:26:04.233879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.187 16:26:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.187 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:11.445 /dev/nbd0 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:11.445 
16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.445 1+0 records in 00:18:11.445 1+0 records out 00:18:11.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638593 s, 6.4 MB/s 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.445 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:11.703 /dev/nbd1 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:11.703 16:26:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.703 1+0 records in 00:18:11.703 1+0 records out 00:18:11.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392695 s, 10.4 MB/s 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.703 16:26:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.960 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.219 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:12.476 16:26:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.476 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.735 [2024-10-08 16:26:05.803316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.735 [2024-10-08 16:26:05.803390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.735 [2024-10-08 16:26:05.803419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:12.735 [2024-10-08 16:26:05.803438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.735 [2024-10-08 16:26:05.806445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.735 [2024-10-08 
16:26:05.806498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.735 [2024-10-08 16:26:05.806633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.735 [2024-10-08 16:26:05.806722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.735 [2024-10-08 16:26:05.806889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.735 [2024-10-08 16:26:05.807031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.735 spare 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.735 [2024-10-08 16:26:05.907160] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:12.735 [2024-10-08 16:26:05.907218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:12.735 [2024-10-08 16:26:05.907624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:12.735 [2024-10-08 16:26:05.912623] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:12.735 [2024-10-08 16:26:05.912822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:12.735 [2024-10-08 16:26:05.913126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.735 "name": "raid_bdev1", 00:18:12.735 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:12.735 "strip_size_kb": 64, 00:18:12.735 "state": "online", 00:18:12.735 "raid_level": "raid5f", 00:18:12.735 "superblock": true, 00:18:12.735 "num_base_bdevs": 3, 
00:18:12.735 "num_base_bdevs_discovered": 3, 00:18:12.735 "num_base_bdevs_operational": 3, 00:18:12.735 "base_bdevs_list": [ 00:18:12.735 { 00:18:12.735 "name": "spare", 00:18:12.735 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:12.735 "is_configured": true, 00:18:12.735 "data_offset": 2048, 00:18:12.735 "data_size": 63488 00:18:12.735 }, 00:18:12.735 { 00:18:12.735 "name": "BaseBdev2", 00:18:12.735 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:12.735 "is_configured": true, 00:18:12.735 "data_offset": 2048, 00:18:12.735 "data_size": 63488 00:18:12.735 }, 00:18:12.735 { 00:18:12.735 "name": "BaseBdev3", 00:18:12.735 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:12.735 "is_configured": true, 00:18:12.735 "data_offset": 2048, 00:18:12.735 "data_size": 63488 00:18:12.735 } 00:18:12.735 ] 00:18:12.735 }' 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.735 16:26:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.301 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.301 "name": "raid_bdev1", 00:18:13.301 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:13.301 "strip_size_kb": 64, 00:18:13.301 "state": "online", 00:18:13.301 "raid_level": "raid5f", 00:18:13.301 "superblock": true, 00:18:13.301 "num_base_bdevs": 3, 00:18:13.301 "num_base_bdevs_discovered": 3, 00:18:13.301 "num_base_bdevs_operational": 3, 00:18:13.301 "base_bdevs_list": [ 00:18:13.301 { 00:18:13.301 "name": "spare", 00:18:13.301 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:13.301 "is_configured": true, 00:18:13.301 "data_offset": 2048, 00:18:13.301 "data_size": 63488 00:18:13.301 }, 00:18:13.301 { 00:18:13.301 "name": "BaseBdev2", 00:18:13.301 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:13.301 "is_configured": true, 00:18:13.301 "data_offset": 2048, 00:18:13.301 "data_size": 63488 00:18:13.301 }, 00:18:13.302 { 00:18:13.302 "name": "BaseBdev3", 00:18:13.302 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:13.302 "is_configured": true, 00:18:13.302 "data_offset": 2048, 00:18:13.302 "data_size": 63488 00:18:13.302 } 00:18:13.302 ] 00:18:13.302 }' 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.302 16:26:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:13.302 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.559 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.559 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:13.559 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.559 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.559 [2024-10-08 16:26:06.631025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.560 
16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.560 "name": "raid_bdev1", 00:18:13.560 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:13.560 "strip_size_kb": 64, 00:18:13.560 "state": "online", 00:18:13.560 "raid_level": "raid5f", 00:18:13.560 "superblock": true, 00:18:13.560 "num_base_bdevs": 3, 00:18:13.560 "num_base_bdevs_discovered": 2, 00:18:13.560 "num_base_bdevs_operational": 2, 00:18:13.560 "base_bdevs_list": [ 00:18:13.560 { 00:18:13.560 "name": null, 00:18:13.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.560 "is_configured": false, 00:18:13.560 "data_offset": 0, 00:18:13.560 "data_size": 63488 00:18:13.560 }, 00:18:13.560 { 00:18:13.560 "name": "BaseBdev2", 00:18:13.560 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:13.560 "is_configured": true, 00:18:13.560 "data_offset": 2048, 00:18:13.560 "data_size": 63488 00:18:13.560 }, 00:18:13.560 { 00:18:13.560 "name": "BaseBdev3", 00:18:13.560 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:13.560 "is_configured": true, 00:18:13.560 "data_offset": 2048, 00:18:13.560 "data_size": 63488 00:18:13.560 } 00:18:13.560 ] 00:18:13.560 }' 00:18:13.560 16:26:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.560 16:26:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.127 16:26:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.127 16:26:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.127 16:26:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.127 [2024-10-08 16:26:07.179215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.127 [2024-10-08 16:26:07.179473] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:14.127 [2024-10-08 16:26:07.179499] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:14.127 [2024-10-08 16:26:07.179579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.127 [2024-10-08 16:26:07.193316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:14.127 16:26:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.127 16:26:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:14.127 [2024-10-08 16:26:07.200587] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.062 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.062 "name": "raid_bdev1", 00:18:15.062 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:15.062 "strip_size_kb": 64, 00:18:15.062 "state": "online", 00:18:15.062 "raid_level": "raid5f", 00:18:15.062 "superblock": true, 00:18:15.062 "num_base_bdevs": 3, 00:18:15.062 "num_base_bdevs_discovered": 3, 00:18:15.062 "num_base_bdevs_operational": 3, 00:18:15.062 "process": { 00:18:15.062 "type": "rebuild", 00:18:15.062 "target": "spare", 00:18:15.062 "progress": { 00:18:15.062 "blocks": 18432, 00:18:15.062 "percent": 14 00:18:15.062 } 00:18:15.062 }, 00:18:15.062 "base_bdevs_list": [ 00:18:15.062 { 00:18:15.062 "name": "spare", 00:18:15.062 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:15.062 "is_configured": true, 00:18:15.062 "data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 }, 00:18:15.062 { 00:18:15.062 "name": "BaseBdev2", 00:18:15.062 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:15.062 "is_configured": true, 00:18:15.062 "data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 }, 00:18:15.062 { 00:18:15.062 "name": "BaseBdev3", 00:18:15.062 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:15.062 "is_configured": true, 00:18:15.062 
"data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 } 00:18:15.063 ] 00:18:15.063 }' 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.063 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.063 [2024-10-08 16:26:08.367183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.320 [2024-10-08 16:26:08.415420] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.321 [2024-10-08 16:26:08.415502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.321 [2024-10-08 16:26:08.415542] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.321 [2024-10-08 16:26:08.415560] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.321 "name": "raid_bdev1", 00:18:15.321 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:15.321 "strip_size_kb": 64, 00:18:15.321 "state": "online", 00:18:15.321 "raid_level": "raid5f", 00:18:15.321 "superblock": true, 00:18:15.321 "num_base_bdevs": 3, 00:18:15.321 "num_base_bdevs_discovered": 2, 00:18:15.321 "num_base_bdevs_operational": 2, 00:18:15.321 "base_bdevs_list": [ 00:18:15.321 { 00:18:15.321 "name": null, 00:18:15.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.321 "is_configured": false, 00:18:15.321 "data_offset": 0, 00:18:15.321 
"data_size": 63488 00:18:15.321 }, 00:18:15.321 { 00:18:15.321 "name": "BaseBdev2", 00:18:15.321 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:15.321 "is_configured": true, 00:18:15.321 "data_offset": 2048, 00:18:15.321 "data_size": 63488 00:18:15.321 }, 00:18:15.321 { 00:18:15.321 "name": "BaseBdev3", 00:18:15.321 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:15.321 "is_configured": true, 00:18:15.321 "data_offset": 2048, 00:18:15.321 "data_size": 63488 00:18:15.321 } 00:18:15.321 ] 00:18:15.321 }' 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.321 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.888 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.888 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.888 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.888 [2024-10-08 16:26:08.984241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.888 [2024-10-08 16:26:08.984469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.888 [2024-10-08 16:26:08.984563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:15.888 [2024-10-08 16:26:08.984597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.888 [2024-10-08 16:26:08.985232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.888 [2024-10-08 16:26:08.985273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.888 [2024-10-08 16:26:08.985388] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:15.888 [2024-10-08 16:26:08.985414] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.888 [2024-10-08 16:26:08.985429] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:15.888 [2024-10-08 16:26:08.985460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.888 [2024-10-08 16:26:08.998911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:15.888 spare 00:18:15.888 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.888 16:26:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:15.888 [2024-10-08 16:26:09.006210] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.824 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.824 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.824 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.825 "name": "raid_bdev1", 00:18:16.825 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:16.825 "strip_size_kb": 64, 00:18:16.825 "state": "online", 00:18:16.825 "raid_level": "raid5f", 00:18:16.825 "superblock": true, 00:18:16.825 "num_base_bdevs": 3, 00:18:16.825 "num_base_bdevs_discovered": 3, 00:18:16.825 "num_base_bdevs_operational": 3, 00:18:16.825 "process": { 00:18:16.825 "type": "rebuild", 00:18:16.825 "target": "spare", 00:18:16.825 "progress": { 00:18:16.825 "blocks": 18432, 00:18:16.825 "percent": 14 00:18:16.825 } 00:18:16.825 }, 00:18:16.825 "base_bdevs_list": [ 00:18:16.825 { 00:18:16.825 "name": "spare", 00:18:16.825 "uuid": "965fe691-d2ad-5df2-bc47-3917598098d4", 00:18:16.825 "is_configured": true, 00:18:16.825 "data_offset": 2048, 00:18:16.825 "data_size": 63488 00:18:16.825 }, 00:18:16.825 { 00:18:16.825 "name": "BaseBdev2", 00:18:16.825 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:16.825 "is_configured": true, 00:18:16.825 "data_offset": 2048, 00:18:16.825 "data_size": 63488 00:18:16.825 }, 00:18:16.825 { 00:18:16.825 "name": "BaseBdev3", 00:18:16.825 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:16.825 "is_configured": true, 00:18:16.825 "data_offset": 2048, 00:18:16.825 "data_size": 63488 00:18:16.825 } 00:18:16.825 ] 00:18:16.825 }' 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.825 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd 
bdev_passthru_delete spare 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.085 [2024-10-08 16:26:10.156900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.085 [2024-10-08 16:26:10.221183] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:17.085 [2024-10-08 16:26:10.221301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.085 [2024-10-08 16:26:10.221333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.085 [2024-10-08 16:26:10.221346] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.085 "name": "raid_bdev1", 00:18:17.085 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:17.085 "strip_size_kb": 64, 00:18:17.085 "state": "online", 00:18:17.085 "raid_level": "raid5f", 00:18:17.085 "superblock": true, 00:18:17.085 "num_base_bdevs": 3, 00:18:17.085 "num_base_bdevs_discovered": 2, 00:18:17.085 "num_base_bdevs_operational": 2, 00:18:17.085 "base_bdevs_list": [ 00:18:17.085 { 00:18:17.085 "name": null, 00:18:17.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.085 "is_configured": false, 00:18:17.085 "data_offset": 0, 00:18:17.085 "data_size": 63488 00:18:17.085 }, 00:18:17.085 { 00:18:17.085 "name": "BaseBdev2", 00:18:17.085 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:17.085 "is_configured": true, 00:18:17.085 "data_offset": 2048, 00:18:17.085 "data_size": 63488 00:18:17.085 }, 00:18:17.085 { 00:18:17.085 "name": "BaseBdev3", 00:18:17.085 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:17.085 "is_configured": true, 00:18:17.085 "data_offset": 2048, 00:18:17.085 "data_size": 63488 00:18:17.085 } 00:18:17.085 ] 00:18:17.085 }' 00:18:17.085 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.085 16:26:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.653 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.653 "name": "raid_bdev1", 00:18:17.653 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:17.654 "strip_size_kb": 64, 00:18:17.654 "state": "online", 00:18:17.654 "raid_level": "raid5f", 00:18:17.654 "superblock": true, 00:18:17.654 "num_base_bdevs": 3, 00:18:17.654 "num_base_bdevs_discovered": 2, 00:18:17.654 "num_base_bdevs_operational": 2, 00:18:17.654 "base_bdevs_list": [ 00:18:17.654 { 00:18:17.654 "name": null, 00:18:17.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.654 "is_configured": false, 00:18:17.654 "data_offset": 0, 00:18:17.654 "data_size": 63488 00:18:17.654 }, 00:18:17.654 { 00:18:17.654 "name": "BaseBdev2", 00:18:17.654 "uuid": 
"bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:17.654 "is_configured": true, 00:18:17.654 "data_offset": 2048, 00:18:17.654 "data_size": 63488 00:18:17.654 }, 00:18:17.654 { 00:18:17.654 "name": "BaseBdev3", 00:18:17.654 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:17.654 "is_configured": true, 00:18:17.654 "data_offset": 2048, 00:18:17.654 "data_size": 63488 00:18:17.654 } 00:18:17.654 ] 00:18:17.654 }' 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.654 [2024-10-08 16:26:10.963246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.654 [2024-10-08 16:26:10.963332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.654 [2024-10-08 16:26:10.963367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000bd80 00:18:17.654 [2024-10-08 16:26:10.963382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.654 [2024-10-08 16:26:10.963930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.654 [2024-10-08 16:26:10.963962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.654 [2024-10-08 16:26:10.964074] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:17.654 [2024-10-08 16:26:10.964096] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.654 [2024-10-08 16:26:10.964117] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:17.654 [2024-10-08 16:26:10.964130] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:17.654 BaseBdev1 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.654 16:26:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.031 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.032 16:26:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.032 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.032 "name": "raid_bdev1", 00:18:19.032 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:19.032 "strip_size_kb": 64, 00:18:19.032 "state": "online", 00:18:19.032 "raid_level": "raid5f", 00:18:19.032 "superblock": true, 00:18:19.032 "num_base_bdevs": 3, 00:18:19.032 "num_base_bdevs_discovered": 2, 00:18:19.032 "num_base_bdevs_operational": 2, 00:18:19.032 "base_bdevs_list": [ 00:18:19.032 { 00:18:19.032 "name": null, 00:18:19.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.032 "is_configured": false, 00:18:19.032 "data_offset": 0, 00:18:19.032 "data_size": 63488 00:18:19.032 }, 00:18:19.032 { 00:18:19.032 "name": "BaseBdev2", 00:18:19.032 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:19.032 "is_configured": true, 00:18:19.032 "data_offset": 2048, 00:18:19.032 "data_size": 63488 00:18:19.032 }, 00:18:19.032 { 00:18:19.032 "name": "BaseBdev3", 00:18:19.032 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:19.032 
"is_configured": true, 00:18:19.032 "data_offset": 2048, 00:18:19.032 "data_size": 63488 00:18:19.032 } 00:18:19.032 ] 00:18:19.032 }' 00:18:19.032 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.032 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.291 "name": "raid_bdev1", 00:18:19.291 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:19.291 "strip_size_kb": 64, 00:18:19.291 "state": "online", 00:18:19.291 "raid_level": "raid5f", 00:18:19.291 "superblock": true, 00:18:19.291 "num_base_bdevs": 3, 00:18:19.291 "num_base_bdevs_discovered": 2, 00:18:19.291 "num_base_bdevs_operational": 2, 00:18:19.291 "base_bdevs_list": [ 00:18:19.291 { 00:18:19.291 "name": null, 
00:18:19.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.291 "is_configured": false, 00:18:19.291 "data_offset": 0, 00:18:19.291 "data_size": 63488 00:18:19.291 }, 00:18:19.291 { 00:18:19.291 "name": "BaseBdev2", 00:18:19.291 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:19.291 "is_configured": true, 00:18:19.291 "data_offset": 2048, 00:18:19.291 "data_size": 63488 00:18:19.291 }, 00:18:19.291 { 00:18:19.291 "name": "BaseBdev3", 00:18:19.291 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:19.291 "is_configured": true, 00:18:19.291 "data_offset": 2048, 00:18:19.291 "data_size": 63488 00:18:19.291 } 00:18:19.291 ] 00:18:19.291 }' 00:18:19.291 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.550 
16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.550 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 [2024-10-08 16:26:12.691844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.550 [2024-10-08 16:26:12.692080] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.550 [2024-10-08 16:26:12.692107] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:19.550 request: 00:18:19.550 { 00:18:19.550 "base_bdev": "BaseBdev1", 00:18:19.550 "raid_bdev": "raid_bdev1", 00:18:19.550 "method": "bdev_raid_add_base_bdev", 00:18:19.550 "req_id": 1 00:18:19.550 } 00:18:19.550 Got JSON-RPC error response 00:18:19.550 response: 00:18:19.550 { 00:18:19.550 "code": -22, 00:18:19.550 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:19.551 } 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.551 16:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:20.516 16:26:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.516 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.516 "name": "raid_bdev1", 00:18:20.516 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:20.516 "strip_size_kb": 64, 00:18:20.517 "state": "online", 00:18:20.517 "raid_level": "raid5f", 00:18:20.517 "superblock": true, 00:18:20.517 "num_base_bdevs": 3, 00:18:20.517 "num_base_bdevs_discovered": 2, 00:18:20.517 "num_base_bdevs_operational": 2, 00:18:20.517 
"base_bdevs_list": [ 00:18:20.517 { 00:18:20.517 "name": null, 00:18:20.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.517 "is_configured": false, 00:18:20.517 "data_offset": 0, 00:18:20.517 "data_size": 63488 00:18:20.517 }, 00:18:20.517 { 00:18:20.517 "name": "BaseBdev2", 00:18:20.517 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:20.517 "is_configured": true, 00:18:20.517 "data_offset": 2048, 00:18:20.517 "data_size": 63488 00:18:20.517 }, 00:18:20.517 { 00:18:20.517 "name": "BaseBdev3", 00:18:20.517 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:20.517 "is_configured": true, 00:18:20.517 "data_offset": 2048, 00:18:20.517 "data_size": 63488 00:18:20.517 } 00:18:20.517 ] 00:18:20.517 }' 00:18:20.517 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.517 16:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.085 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.085 "name": "raid_bdev1", 00:18:21.085 "uuid": "317eb8b5-8fe2-46c9-a501-fa9008a9df89", 00:18:21.085 "strip_size_kb": 64, 00:18:21.085 "state": "online", 00:18:21.085 "raid_level": "raid5f", 00:18:21.085 "superblock": true, 00:18:21.085 "num_base_bdevs": 3, 00:18:21.085 "num_base_bdevs_discovered": 2, 00:18:21.085 "num_base_bdevs_operational": 2, 00:18:21.085 "base_bdevs_list": [ 00:18:21.085 { 00:18:21.085 "name": null, 00:18:21.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.085 "is_configured": false, 00:18:21.085 "data_offset": 0, 00:18:21.085 "data_size": 63488 00:18:21.085 }, 00:18:21.085 { 00:18:21.085 "name": "BaseBdev2", 00:18:21.085 "uuid": "bac4ef5a-7891-543e-a34f-7b2b32af2e34", 00:18:21.085 "is_configured": true, 00:18:21.085 "data_offset": 2048, 00:18:21.085 "data_size": 63488 00:18:21.085 }, 00:18:21.085 { 00:18:21.085 "name": "BaseBdev3", 00:18:21.085 "uuid": "9310c948-a739-5018-ae1c-34995377072d", 00:18:21.085 "is_configured": true, 00:18:21.086 "data_offset": 2048, 00:18:21.086 "data_size": 63488 00:18:21.086 } 00:18:21.086 ] 00:18:21.086 }' 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82765 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82765 ']' 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 82765 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.086 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82765 00:18:21.344 killing process with pid 82765 00:18:21.344 Received shutdown signal, test time was about 60.000000 seconds 00:18:21.344 00:18:21.344 Latency(us) 00:18:21.344 [2024-10-08T16:26:14.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.344 [2024-10-08T16:26:14.666Z] =================================================================================================================== 00:18:21.344 [2024-10-08T16:26:14.666Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.344 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.344 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.344 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82765' 00:18:21.344 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82765 00:18:21.344 [2024-10-08 16:26:14.412402] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.344 16:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82765 00:18:21.344 [2024-10-08 16:26:14.412596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.344 [2024-10-08 16:26:14.412684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.344 [2024-10-08 16:26:14.412705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state offline 00:18:21.602 [2024-10-08 16:26:14.772936] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.997 ************************************ 00:18:22.997 END TEST raid5f_rebuild_test_sb 00:18:22.997 ************************************ 00:18:22.997 16:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:22.997 00:18:22.997 real 0m25.243s 00:18:22.997 user 0m33.356s 00:18:22.997 sys 0m2.841s 00:18:22.997 16:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.997 16:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.997 16:26:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:22.997 16:26:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:22.997 16:26:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:22.997 16:26:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.997 16:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.997 ************************************ 00:18:22.997 START TEST raid5f_state_function_test 00:18:22.997 ************************************ 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:22.997 16:26:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:22.997 16:26:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83533 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83533' 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:22.997 Process raid pid: 83533 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83533 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83533 ']' 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.997 16:26:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.997 [2024-10-08 16:26:16.159881] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:18:22.997 [2024-10-08 16:26:16.161073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.256 [2024-10-08 16:26:16.342292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.513 [2024-10-08 16:26:16.637941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.772 [2024-10-08 16:26:16.851279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.772 [2024-10-08 16:26:16.851334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.031 [2024-10-08 16:26:17.214842] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.031 [2024-10-08 16:26:17.214924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.031 [2024-10-08 
16:26:17.214952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.031 [2024-10-08 16:26:17.214979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.031 [2024-10-08 16:26:17.214995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.031 [2024-10-08 16:26:17.215017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.031 [2024-10-08 16:26:17.215032] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:24.031 [2024-10-08 16:26:17.215058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.031 16:26:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.031 "name": "Existed_Raid", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "strip_size_kb": 64, 00:18:24.031 "state": "configuring", 00:18:24.031 "raid_level": "raid5f", 00:18:24.031 "superblock": false, 00:18:24.031 "num_base_bdevs": 4, 00:18:24.031 "num_base_bdevs_discovered": 0, 00:18:24.031 "num_base_bdevs_operational": 4, 00:18:24.031 "base_bdevs_list": [ 00:18:24.031 { 00:18:24.031 "name": "BaseBdev1", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev2", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev3", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev4", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 
"data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 } 00:18:24.031 ] 00:18:24.031 }' 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.031 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.599 [2024-10-08 16:26:17.734893] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.599 [2024-10-08 16:26:17.734979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:24.599 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.600 [2024-10-08 16:26:17.747097] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.600 [2024-10-08 16:26:17.747318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.600 [2024-10-08 16:26:17.747455] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.600 [2024-10-08 16:26:17.747649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.600 [2024-10-08 
16:26:17.747764] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.600 [2024-10-08 16:26:17.747907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.600 [2024-10-08 16:26:17.748044] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:24.600 [2024-10-08 16:26:17.748179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.600 [2024-10-08 16:26:17.805007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.600 BaseBdev1 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.600 [ 00:18:24.600 { 00:18:24.600 "name": "BaseBdev1", 00:18:24.600 "aliases": [ 00:18:24.600 "648ecad5-e4b9-4f7c-9f0d-0c446c63e057" 00:18:24.600 ], 00:18:24.600 "product_name": "Malloc disk", 00:18:24.600 "block_size": 512, 00:18:24.600 "num_blocks": 65536, 00:18:24.600 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:24.600 "assigned_rate_limits": { 00:18:24.600 "rw_ios_per_sec": 0, 00:18:24.600 "rw_mbytes_per_sec": 0, 00:18:24.600 "r_mbytes_per_sec": 0, 00:18:24.600 "w_mbytes_per_sec": 0 00:18:24.600 }, 00:18:24.600 "claimed": true, 00:18:24.600 "claim_type": "exclusive_write", 00:18:24.600 "zoned": false, 00:18:24.600 "supported_io_types": { 00:18:24.600 "read": true, 00:18:24.600 "write": true, 00:18:24.600 "unmap": true, 00:18:24.600 "flush": true, 00:18:24.600 "reset": true, 00:18:24.600 "nvme_admin": false, 00:18:24.600 "nvme_io": false, 00:18:24.600 "nvme_io_md": false, 00:18:24.600 "write_zeroes": true, 00:18:24.600 "zcopy": true, 00:18:24.600 "get_zone_info": false, 00:18:24.600 "zone_management": false, 00:18:24.600 "zone_append": false, 00:18:24.600 "compare": false, 00:18:24.600 "compare_and_write": false, 00:18:24.600 "abort": true, 00:18:24.600 "seek_hole": false, 00:18:24.600 "seek_data": false, 00:18:24.600 "copy": true, 00:18:24.600 
"nvme_iov_md": false 00:18:24.600 }, 00:18:24.600 "memory_domains": [ 00:18:24.600 { 00:18:24.600 "dma_device_id": "system", 00:18:24.600 "dma_device_type": 1 00:18:24.600 }, 00:18:24.600 { 00:18:24.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.600 "dma_device_type": 2 00:18:24.600 } 00:18:24.600 ], 00:18:24.600 "driver_specific": {} 00:18:24.600 } 00:18:24.600 ] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.600 "name": "Existed_Raid", 00:18:24.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.600 "strip_size_kb": 64, 00:18:24.600 "state": "configuring", 00:18:24.600 "raid_level": "raid5f", 00:18:24.600 "superblock": false, 00:18:24.600 "num_base_bdevs": 4, 00:18:24.600 "num_base_bdevs_discovered": 1, 00:18:24.600 "num_base_bdevs_operational": 4, 00:18:24.600 "base_bdevs_list": [ 00:18:24.600 { 00:18:24.600 "name": "BaseBdev1", 00:18:24.600 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:24.600 "is_configured": true, 00:18:24.600 "data_offset": 0, 00:18:24.600 "data_size": 65536 00:18:24.600 }, 00:18:24.600 { 00:18:24.600 "name": "BaseBdev2", 00:18:24.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.600 "is_configured": false, 00:18:24.600 "data_offset": 0, 00:18:24.600 "data_size": 0 00:18:24.600 }, 00:18:24.600 { 00:18:24.600 "name": "BaseBdev3", 00:18:24.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.600 "is_configured": false, 00:18:24.600 "data_offset": 0, 00:18:24.600 "data_size": 0 00:18:24.600 }, 00:18:24.600 { 00:18:24.600 "name": "BaseBdev4", 00:18:24.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.600 "is_configured": false, 00:18:24.600 "data_offset": 0, 00:18:24.600 "data_size": 0 00:18:24.600 } 00:18:24.600 ] 00:18:24.600 }' 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.600 16:26:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.171 [2024-10-08 16:26:18.377258] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.171 [2024-10-08 16:26:18.377571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.171 [2024-10-08 16:26:18.385301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.171 [2024-10-08 16:26:18.388019] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.171 [2024-10-08 16:26:18.388083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.171 [2024-10-08 16:26:18.388100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.171 [2024-10-08 16:26:18.388118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.171 [2024-10-08 16:26:18.388128] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:25.171 [2024-10-08 16:26:18.388142] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.171 16:26:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.171 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.171 "name": "Existed_Raid", 00:18:25.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.171 "strip_size_kb": 64, 00:18:25.171 "state": "configuring", 00:18:25.171 "raid_level": "raid5f", 00:18:25.171 "superblock": false, 00:18:25.171 "num_base_bdevs": 4, 00:18:25.171 "num_base_bdevs_discovered": 1, 00:18:25.171 "num_base_bdevs_operational": 4, 00:18:25.171 "base_bdevs_list": [ 00:18:25.171 { 00:18:25.171 "name": "BaseBdev1", 00:18:25.171 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:25.171 "is_configured": true, 00:18:25.171 "data_offset": 0, 00:18:25.171 "data_size": 65536 00:18:25.171 }, 00:18:25.171 { 00:18:25.171 "name": "BaseBdev2", 00:18:25.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.171 "is_configured": false, 00:18:25.171 "data_offset": 0, 00:18:25.171 "data_size": 0 00:18:25.171 }, 00:18:25.171 { 00:18:25.171 "name": "BaseBdev3", 00:18:25.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.171 "is_configured": false, 00:18:25.171 "data_offset": 0, 00:18:25.171 "data_size": 0 00:18:25.172 }, 00:18:25.172 { 00:18:25.172 "name": "BaseBdev4", 00:18:25.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.172 "is_configured": false, 00:18:25.172 "data_offset": 0, 00:18:25.172 "data_size": 0 00:18:25.172 } 00:18:25.172 ] 00:18:25.172 }' 00:18:25.172 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.172 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:25.743 16:26:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.743 [2024-10-08 16:26:18.973446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.743 BaseBdev2 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.743 16:26:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.743 [ 00:18:25.743 { 00:18:25.743 "name": 
"BaseBdev2", 00:18:25.743 "aliases": [ 00:18:25.743 "286c315b-e99e-4824-84d4-0a54ec7b4ebe" 00:18:25.743 ], 00:18:25.743 "product_name": "Malloc disk", 00:18:25.743 "block_size": 512, 00:18:25.743 "num_blocks": 65536, 00:18:25.743 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:25.743 "assigned_rate_limits": { 00:18:25.743 "rw_ios_per_sec": 0, 00:18:25.743 "rw_mbytes_per_sec": 0, 00:18:25.743 "r_mbytes_per_sec": 0, 00:18:25.743 "w_mbytes_per_sec": 0 00:18:25.743 }, 00:18:25.743 "claimed": true, 00:18:25.743 "claim_type": "exclusive_write", 00:18:25.743 "zoned": false, 00:18:25.743 "supported_io_types": { 00:18:25.743 "read": true, 00:18:25.743 "write": true, 00:18:25.743 "unmap": true, 00:18:25.743 "flush": true, 00:18:25.743 "reset": true, 00:18:25.743 "nvme_admin": false, 00:18:25.743 "nvme_io": false, 00:18:25.743 "nvme_io_md": false, 00:18:25.743 "write_zeroes": true, 00:18:25.743 "zcopy": true, 00:18:25.743 "get_zone_info": false, 00:18:25.743 "zone_management": false, 00:18:25.743 "zone_append": false, 00:18:25.743 "compare": false, 00:18:25.743 "compare_and_write": false, 00:18:25.743 "abort": true, 00:18:25.743 "seek_hole": false, 00:18:25.743 "seek_data": false, 00:18:25.743 "copy": true, 00:18:25.743 "nvme_iov_md": false 00:18:25.743 }, 00:18:25.743 "memory_domains": [ 00:18:25.743 { 00:18:25.743 "dma_device_id": "system", 00:18:25.743 "dma_device_type": 1 00:18:25.743 }, 00:18:25.743 { 00:18:25.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.743 "dma_device_type": 2 00:18:25.743 } 00:18:25.743 ], 00:18:25.743 "driver_specific": {} 00:18:25.743 } 00:18:25.743 ] 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.743 "name": "Existed_Raid", 00:18:25.743 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:25.743 "strip_size_kb": 64, 00:18:25.743 "state": "configuring", 00:18:25.743 "raid_level": "raid5f", 00:18:25.743 "superblock": false, 00:18:25.743 "num_base_bdevs": 4, 00:18:25.743 "num_base_bdevs_discovered": 2, 00:18:25.743 "num_base_bdevs_operational": 4, 00:18:25.743 "base_bdevs_list": [ 00:18:25.743 { 00:18:25.743 "name": "BaseBdev1", 00:18:25.743 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:25.743 "is_configured": true, 00:18:25.743 "data_offset": 0, 00:18:25.743 "data_size": 65536 00:18:25.743 }, 00:18:25.743 { 00:18:25.743 "name": "BaseBdev2", 00:18:25.743 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:25.743 "is_configured": true, 00:18:25.743 "data_offset": 0, 00:18:25.743 "data_size": 65536 00:18:25.743 }, 00:18:25.743 { 00:18:25.743 "name": "BaseBdev3", 00:18:25.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.743 "is_configured": false, 00:18:25.743 "data_offset": 0, 00:18:25.743 "data_size": 0 00:18:25.743 }, 00:18:25.743 { 00:18:25.743 "name": "BaseBdev4", 00:18:25.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.743 "is_configured": false, 00:18:25.743 "data_offset": 0, 00:18:25.743 "data_size": 0 00:18:25.743 } 00:18:25.743 ] 00:18:25.743 }' 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.743 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.310 [2024-10-08 16:26:19.576833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.310 BaseBdev3 00:18:26.310 16:26:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.310 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.310 [ 00:18:26.310 { 00:18:26.310 "name": "BaseBdev3", 00:18:26.310 "aliases": [ 00:18:26.310 "60257a31-747c-47cd-8b28-f1de50a203f2" 00:18:26.310 ], 00:18:26.310 "product_name": "Malloc disk", 00:18:26.310 "block_size": 512, 00:18:26.310 "num_blocks": 65536, 00:18:26.310 "uuid": "60257a31-747c-47cd-8b28-f1de50a203f2", 00:18:26.311 "assigned_rate_limits": { 00:18:26.311 "rw_ios_per_sec": 0, 00:18:26.311 
"rw_mbytes_per_sec": 0, 00:18:26.311 "r_mbytes_per_sec": 0, 00:18:26.311 "w_mbytes_per_sec": 0 00:18:26.311 }, 00:18:26.311 "claimed": true, 00:18:26.311 "claim_type": "exclusive_write", 00:18:26.311 "zoned": false, 00:18:26.311 "supported_io_types": { 00:18:26.311 "read": true, 00:18:26.311 "write": true, 00:18:26.311 "unmap": true, 00:18:26.311 "flush": true, 00:18:26.311 "reset": true, 00:18:26.311 "nvme_admin": false, 00:18:26.311 "nvme_io": false, 00:18:26.311 "nvme_io_md": false, 00:18:26.311 "write_zeroes": true, 00:18:26.311 "zcopy": true, 00:18:26.311 "get_zone_info": false, 00:18:26.311 "zone_management": false, 00:18:26.311 "zone_append": false, 00:18:26.311 "compare": false, 00:18:26.311 "compare_and_write": false, 00:18:26.311 "abort": true, 00:18:26.311 "seek_hole": false, 00:18:26.311 "seek_data": false, 00:18:26.311 "copy": true, 00:18:26.311 "nvme_iov_md": false 00:18:26.311 }, 00:18:26.311 "memory_domains": [ 00:18:26.311 { 00:18:26.311 "dma_device_id": "system", 00:18:26.311 "dma_device_type": 1 00:18:26.311 }, 00:18:26.311 { 00:18:26.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.311 "dma_device_type": 2 00:18:26.311 } 00:18:26.311 ], 00:18:26.311 "driver_specific": {} 00:18:26.311 } 00:18:26.311 ] 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.311 16:26:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.311 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.570 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.570 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.570 "name": "Existed_Raid", 00:18:26.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.570 "strip_size_kb": 64, 00:18:26.570 "state": "configuring", 00:18:26.570 "raid_level": "raid5f", 00:18:26.570 "superblock": false, 00:18:26.570 "num_base_bdevs": 4, 00:18:26.570 "num_base_bdevs_discovered": 3, 00:18:26.570 "num_base_bdevs_operational": 4, 00:18:26.570 "base_bdevs_list": [ 00:18:26.570 { 
00:18:26.570 "name": "BaseBdev1", 00:18:26.570 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:26.570 "is_configured": true, 00:18:26.570 "data_offset": 0, 00:18:26.570 "data_size": 65536 00:18:26.570 }, 00:18:26.570 { 00:18:26.570 "name": "BaseBdev2", 00:18:26.570 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:26.570 "is_configured": true, 00:18:26.570 "data_offset": 0, 00:18:26.570 "data_size": 65536 00:18:26.570 }, 00:18:26.570 { 00:18:26.570 "name": "BaseBdev3", 00:18:26.570 "uuid": "60257a31-747c-47cd-8b28-f1de50a203f2", 00:18:26.570 "is_configured": true, 00:18:26.570 "data_offset": 0, 00:18:26.570 "data_size": 65536 00:18:26.570 }, 00:18:26.570 { 00:18:26.570 "name": "BaseBdev4", 00:18:26.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.570 "is_configured": false, 00:18:26.570 "data_offset": 0, 00:18:26.570 "data_size": 0 00:18:26.570 } 00:18:26.570 ] 00:18:26.570 }' 00:18:26.570 16:26:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.570 16:26:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.828 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:26.828 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.828 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.828 [2024-10-08 16:26:20.147262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:26.828 [2024-10-08 16:26:20.147641] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:26.828 [2024-10-08 16:26:20.147686] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:26.828 [2024-10-08 16:26:20.148086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:27.087 [2024-10-08 
16:26:20.154941] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:27.087 [2024-10-08 16:26:20.155169] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:27.087 [2024-10-08 16:26:20.155613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.087 BaseBdev4 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:27.087 [ 00:18:27.087 { 00:18:27.087 "name": "BaseBdev4", 00:18:27.087 "aliases": [ 00:18:27.087 "0ac6dfac-8366-4d44-9d32-9a8a6880769e" 00:18:27.087 ], 00:18:27.087 "product_name": "Malloc disk", 00:18:27.087 "block_size": 512, 00:18:27.087 "num_blocks": 65536, 00:18:27.087 "uuid": "0ac6dfac-8366-4d44-9d32-9a8a6880769e", 00:18:27.087 "assigned_rate_limits": { 00:18:27.087 "rw_ios_per_sec": 0, 00:18:27.087 "rw_mbytes_per_sec": 0, 00:18:27.087 "r_mbytes_per_sec": 0, 00:18:27.087 "w_mbytes_per_sec": 0 00:18:27.087 }, 00:18:27.087 "claimed": true, 00:18:27.087 "claim_type": "exclusive_write", 00:18:27.087 "zoned": false, 00:18:27.087 "supported_io_types": { 00:18:27.087 "read": true, 00:18:27.087 "write": true, 00:18:27.087 "unmap": true, 00:18:27.087 "flush": true, 00:18:27.087 "reset": true, 00:18:27.087 "nvme_admin": false, 00:18:27.087 "nvme_io": false, 00:18:27.087 "nvme_io_md": false, 00:18:27.087 "write_zeroes": true, 00:18:27.087 "zcopy": true, 00:18:27.087 "get_zone_info": false, 00:18:27.087 "zone_management": false, 00:18:27.087 "zone_append": false, 00:18:27.087 "compare": false, 00:18:27.087 "compare_and_write": false, 00:18:27.087 "abort": true, 00:18:27.087 "seek_hole": false, 00:18:27.087 "seek_data": false, 00:18:27.087 "copy": true, 00:18:27.087 "nvme_iov_md": false 00:18:27.087 }, 00:18:27.087 "memory_domains": [ 00:18:27.087 { 00:18:27.087 "dma_device_id": "system", 00:18:27.087 "dma_device_type": 1 00:18:27.087 }, 00:18:27.087 { 00:18:27.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.087 "dma_device_type": 2 00:18:27.087 } 00:18:27.087 ], 00:18:27.087 "driver_specific": {} 00:18:27.087 } 00:18:27.087 ] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:27.087 16:26:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.087 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.087 "name": "Existed_Raid", 00:18:27.087 
"uuid": "009ac450-365b-42f1-8b46-6682d2adbdc0", 00:18:27.087 "strip_size_kb": 64, 00:18:27.087 "state": "online", 00:18:27.087 "raid_level": "raid5f", 00:18:27.087 "superblock": false, 00:18:27.087 "num_base_bdevs": 4, 00:18:27.087 "num_base_bdevs_discovered": 4, 00:18:27.087 "num_base_bdevs_operational": 4, 00:18:27.088 "base_bdevs_list": [ 00:18:27.088 { 00:18:27.088 "name": "BaseBdev1", 00:18:27.088 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:27.088 "is_configured": true, 00:18:27.088 "data_offset": 0, 00:18:27.088 "data_size": 65536 00:18:27.088 }, 00:18:27.088 { 00:18:27.088 "name": "BaseBdev2", 00:18:27.088 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:27.088 "is_configured": true, 00:18:27.088 "data_offset": 0, 00:18:27.088 "data_size": 65536 00:18:27.088 }, 00:18:27.088 { 00:18:27.088 "name": "BaseBdev3", 00:18:27.088 "uuid": "60257a31-747c-47cd-8b28-f1de50a203f2", 00:18:27.088 "is_configured": true, 00:18:27.088 "data_offset": 0, 00:18:27.088 "data_size": 65536 00:18:27.088 }, 00:18:27.088 { 00:18:27.088 "name": "BaseBdev4", 00:18:27.088 "uuid": "0ac6dfac-8366-4d44-9d32-9a8a6880769e", 00:18:27.088 "is_configured": true, 00:18:27.088 "data_offset": 0, 00:18:27.088 "data_size": 65536 00:18:27.088 } 00:18:27.088 ] 00:18:27.088 }' 00:18:27.088 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.088 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.655 16:26:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.655 [2024-10-08 16:26:20.723176] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.655 "name": "Existed_Raid", 00:18:27.655 "aliases": [ 00:18:27.655 "009ac450-365b-42f1-8b46-6682d2adbdc0" 00:18:27.655 ], 00:18:27.655 "product_name": "Raid Volume", 00:18:27.655 "block_size": 512, 00:18:27.655 "num_blocks": 196608, 00:18:27.655 "uuid": "009ac450-365b-42f1-8b46-6682d2adbdc0", 00:18:27.655 "assigned_rate_limits": { 00:18:27.655 "rw_ios_per_sec": 0, 00:18:27.655 "rw_mbytes_per_sec": 0, 00:18:27.655 "r_mbytes_per_sec": 0, 00:18:27.655 "w_mbytes_per_sec": 0 00:18:27.655 }, 00:18:27.655 "claimed": false, 00:18:27.655 "zoned": false, 00:18:27.655 "supported_io_types": { 00:18:27.655 "read": true, 00:18:27.655 "write": true, 00:18:27.655 "unmap": false, 00:18:27.655 "flush": false, 00:18:27.655 "reset": true, 00:18:27.655 "nvme_admin": false, 00:18:27.655 "nvme_io": false, 00:18:27.655 "nvme_io_md": false, 00:18:27.655 "write_zeroes": true, 00:18:27.655 "zcopy": false, 00:18:27.655 "get_zone_info": false, 00:18:27.655 "zone_management": false, 00:18:27.655 "zone_append": false, 
00:18:27.655 "compare": false, 00:18:27.655 "compare_and_write": false, 00:18:27.655 "abort": false, 00:18:27.655 "seek_hole": false, 00:18:27.655 "seek_data": false, 00:18:27.655 "copy": false, 00:18:27.655 "nvme_iov_md": false 00:18:27.655 }, 00:18:27.655 "driver_specific": { 00:18:27.655 "raid": { 00:18:27.655 "uuid": "009ac450-365b-42f1-8b46-6682d2adbdc0", 00:18:27.655 "strip_size_kb": 64, 00:18:27.655 "state": "online", 00:18:27.655 "raid_level": "raid5f", 00:18:27.655 "superblock": false, 00:18:27.655 "num_base_bdevs": 4, 00:18:27.655 "num_base_bdevs_discovered": 4, 00:18:27.655 "num_base_bdevs_operational": 4, 00:18:27.655 "base_bdevs_list": [ 00:18:27.655 { 00:18:27.655 "name": "BaseBdev1", 00:18:27.655 "uuid": "648ecad5-e4b9-4f7c-9f0d-0c446c63e057", 00:18:27.655 "is_configured": true, 00:18:27.655 "data_offset": 0, 00:18:27.655 "data_size": 65536 00:18:27.655 }, 00:18:27.655 { 00:18:27.655 "name": "BaseBdev2", 00:18:27.655 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:27.655 "is_configured": true, 00:18:27.655 "data_offset": 0, 00:18:27.655 "data_size": 65536 00:18:27.655 }, 00:18:27.655 { 00:18:27.655 "name": "BaseBdev3", 00:18:27.655 "uuid": "60257a31-747c-47cd-8b28-f1de50a203f2", 00:18:27.655 "is_configured": true, 00:18:27.655 "data_offset": 0, 00:18:27.655 "data_size": 65536 00:18:27.655 }, 00:18:27.655 { 00:18:27.655 "name": "BaseBdev4", 00:18:27.655 "uuid": "0ac6dfac-8366-4d44-9d32-9a8a6880769e", 00:18:27.655 "is_configured": true, 00:18:27.655 "data_offset": 0, 00:18:27.655 "data_size": 65536 00:18:27.655 } 00:18:27.655 ] 00:18:27.655 } 00:18:27.655 } 00:18:27.655 }' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:27.655 BaseBdev2 00:18:27.655 BaseBdev3 00:18:27.655 BaseBdev4' 00:18:27.655 
16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.655 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.656 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.914 16:26:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.914 16:26:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.914 [2024-10-08 16:26:21.099081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.914 16:26:21 
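The repeated `[[ 512 == \5\1\2\ \ \ ]]` comparisons above are matching a string with three trailing spaces: the base bdevs are malloc disks with no metadata, so `md_size`, `md_interleave`, and `dif_type` are null, and `jq`'s `join(" ")` renders nulls as empty strings. A small sketch of that behavior, assuming only `jq` is available:

```shell
# Same per-bdev signature as bdev_raid.sh@192; the null fields collapse to
# empty strings under join(" "), leaving "512" plus three trailing spaces,
# which is exactly what the log's [[ 512 == \5\1\2\ \ \ ]] tests match.
sig=$(echo '{"block_size":512}' | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf '%s|\n' "$sig"
# prints: 512   |
```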
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.914 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.172 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.172 "name": "Existed_Raid", 00:18:28.172 "uuid": "009ac450-365b-42f1-8b46-6682d2adbdc0", 00:18:28.172 "strip_size_kb": 64, 00:18:28.172 "state": "online", 00:18:28.172 "raid_level": "raid5f", 00:18:28.172 "superblock": false, 00:18:28.172 "num_base_bdevs": 4, 00:18:28.172 "num_base_bdevs_discovered": 3, 00:18:28.172 "num_base_bdevs_operational": 3, 00:18:28.172 "base_bdevs_list": [ 00:18:28.172 { 00:18:28.172 "name": null, 00:18:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.172 "is_configured": false, 00:18:28.172 "data_offset": 0, 00:18:28.172 "data_size": 65536 00:18:28.172 }, 00:18:28.172 { 00:18:28.172 "name": "BaseBdev2", 00:18:28.172 "uuid": "286c315b-e99e-4824-84d4-0a54ec7b4ebe", 00:18:28.172 "is_configured": true, 00:18:28.172 "data_offset": 0, 00:18:28.172 "data_size": 65536 00:18:28.172 }, 00:18:28.172 { 00:18:28.172 "name": "BaseBdev3", 
00:18:28.172 "uuid": "60257a31-747c-47cd-8b28-f1de50a203f2", 00:18:28.172 "is_configured": true, 00:18:28.172 "data_offset": 0, 00:18:28.172 "data_size": 65536 00:18:28.172 }, 00:18:28.172 { 00:18:28.172 "name": "BaseBdev4", 00:18:28.172 "uuid": "0ac6dfac-8366-4d44-9d32-9a8a6880769e", 00:18:28.172 "is_configured": true, 00:18:28.172 "data_offset": 0, 00:18:28.172 "data_size": 65536 00:18:28.172 } 00:18:28.172 ] 00:18:28.172 }' 00:18:28.172 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.172 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.431 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:28.691 [2024-10-08 16:26:21.792970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.691 [2024-10-08 16:26:21.793357] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.691 [2024-10-08 16:26:21.883030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.691 16:26:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.691 [2024-10-08 16:26:21.947150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.950 [2024-10-08 16:26:22.099376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:28.950 [2024-10-08 16:26:22.099462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.950 
16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.950 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.209 BaseBdev2 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.209 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.209 [ 00:18:29.209 { 00:18:29.209 "name": "BaseBdev2", 00:18:29.209 "aliases": [ 00:18:29.209 "732faedf-299c-4d73-84e6-edde40d32e1c" 00:18:29.209 ], 00:18:29.210 "product_name": "Malloc disk", 00:18:29.210 "block_size": 512, 00:18:29.210 "num_blocks": 65536, 00:18:29.210 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:29.210 "assigned_rate_limits": { 00:18:29.210 "rw_ios_per_sec": 0, 00:18:29.210 "rw_mbytes_per_sec": 0, 00:18:29.210 "r_mbytes_per_sec": 0, 00:18:29.210 "w_mbytes_per_sec": 0 00:18:29.210 }, 00:18:29.210 "claimed": false, 00:18:29.210 "zoned": false, 00:18:29.210 "supported_io_types": { 00:18:29.210 "read": true, 00:18:29.210 "write": true, 00:18:29.210 "unmap": true, 00:18:29.210 "flush": true, 00:18:29.210 "reset": true, 00:18:29.210 "nvme_admin": false, 00:18:29.210 "nvme_io": false, 00:18:29.210 "nvme_io_md": false, 00:18:29.210 "write_zeroes": true, 00:18:29.210 "zcopy": true, 
00:18:29.210 "get_zone_info": false, 00:18:29.210 "zone_management": false, 00:18:29.210 "zone_append": false, 00:18:29.210 "compare": false, 00:18:29.210 "compare_and_write": false, 00:18:29.210 "abort": true, 00:18:29.210 "seek_hole": false, 00:18:29.210 "seek_data": false, 00:18:29.210 "copy": true, 00:18:29.210 "nvme_iov_md": false 00:18:29.210 }, 00:18:29.210 "memory_domains": [ 00:18:29.210 { 00:18:29.210 "dma_device_id": "system", 00:18:29.210 "dma_device_type": 1 00:18:29.210 }, 00:18:29.210 { 00:18:29.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.210 "dma_device_type": 2 00:18:29.210 } 00:18:29.210 ], 00:18:29.210 "driver_specific": {} 00:18:29.210 } 00:18:29.210 ] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 BaseBdev3 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.210 16:26:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 [ 00:18:29.210 { 00:18:29.210 "name": "BaseBdev3", 00:18:29.210 "aliases": [ 00:18:29.210 "057ee86f-504f-4ff2-b62d-ed7e5b76a243" 00:18:29.210 ], 00:18:29.210 "product_name": "Malloc disk", 00:18:29.210 "block_size": 512, 00:18:29.210 "num_blocks": 65536, 00:18:29.210 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:29.210 "assigned_rate_limits": { 00:18:29.210 "rw_ios_per_sec": 0, 00:18:29.210 "rw_mbytes_per_sec": 0, 00:18:29.210 "r_mbytes_per_sec": 0, 00:18:29.210 "w_mbytes_per_sec": 0 00:18:29.210 }, 00:18:29.210 "claimed": false, 00:18:29.210 "zoned": false, 00:18:29.210 "supported_io_types": { 00:18:29.210 "read": true, 00:18:29.210 "write": true, 00:18:29.210 "unmap": true, 00:18:29.210 "flush": true, 00:18:29.210 "reset": true, 00:18:29.210 "nvme_admin": false, 00:18:29.210 "nvme_io": false, 00:18:29.210 "nvme_io_md": false, 00:18:29.210 
"write_zeroes": true, 00:18:29.210 "zcopy": true, 00:18:29.210 "get_zone_info": false, 00:18:29.210 "zone_management": false, 00:18:29.210 "zone_append": false, 00:18:29.210 "compare": false, 00:18:29.210 "compare_and_write": false, 00:18:29.210 "abort": true, 00:18:29.210 "seek_hole": false, 00:18:29.210 "seek_data": false, 00:18:29.210 "copy": true, 00:18:29.210 "nvme_iov_md": false 00:18:29.210 }, 00:18:29.210 "memory_domains": [ 00:18:29.210 { 00:18:29.210 "dma_device_id": "system", 00:18:29.210 "dma_device_type": 1 00:18:29.210 }, 00:18:29.210 { 00:18:29.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.210 "dma_device_type": 2 00:18:29.210 } 00:18:29.210 ], 00:18:29.210 "driver_specific": {} 00:18:29.210 } 00:18:29.210 ] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 BaseBdev4 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.210 [ 00:18:29.210 { 00:18:29.210 "name": "BaseBdev4", 00:18:29.210 "aliases": [ 00:18:29.210 "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd" 00:18:29.210 ], 00:18:29.210 "product_name": "Malloc disk", 00:18:29.210 "block_size": 512, 00:18:29.210 "num_blocks": 65536, 00:18:29.210 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:29.210 "assigned_rate_limits": { 00:18:29.210 "rw_ios_per_sec": 0, 00:18:29.210 "rw_mbytes_per_sec": 0, 00:18:29.210 "r_mbytes_per_sec": 0, 00:18:29.210 "w_mbytes_per_sec": 0 00:18:29.210 }, 00:18:29.210 "claimed": false, 00:18:29.210 "zoned": false, 00:18:29.210 "supported_io_types": { 00:18:29.210 "read": true, 00:18:29.210 "write": true, 00:18:29.210 "unmap": true, 00:18:29.210 "flush": true, 00:18:29.210 "reset": true, 00:18:29.210 "nvme_admin": false, 00:18:29.210 "nvme_io": false, 00:18:29.210 
"nvme_io_md": false, 00:18:29.210 "write_zeroes": true, 00:18:29.210 "zcopy": true, 00:18:29.210 "get_zone_info": false, 00:18:29.210 "zone_management": false, 00:18:29.210 "zone_append": false, 00:18:29.210 "compare": false, 00:18:29.210 "compare_and_write": false, 00:18:29.210 "abort": true, 00:18:29.210 "seek_hole": false, 00:18:29.210 "seek_data": false, 00:18:29.210 "copy": true, 00:18:29.210 "nvme_iov_md": false 00:18:29.210 }, 00:18:29.210 "memory_domains": [ 00:18:29.210 { 00:18:29.210 "dma_device_id": "system", 00:18:29.210 "dma_device_type": 1 00:18:29.210 }, 00:18:29.210 { 00:18:29.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.210 "dma_device_type": 2 00:18:29.210 } 00:18:29.210 ], 00:18:29.210 "driver_specific": {} 00:18:29.210 } 00:18:29.210 ] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.210 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.211 [2024-10-08 16:26:22.475685] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.211 [2024-10-08 16:26:22.475984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:29.211 [2024-10-08 16:26:22.476032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:18:29.211 [2024-10-08 16:26:22.478504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.211 [2024-10-08 16:26:22.478595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.211 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.469 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.469 "name": "Existed_Raid", 00:18:29.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.469 "strip_size_kb": 64, 00:18:29.469 "state": "configuring", 00:18:29.469 "raid_level": "raid5f", 00:18:29.469 "superblock": false, 00:18:29.469 "num_base_bdevs": 4, 00:18:29.469 "num_base_bdevs_discovered": 3, 00:18:29.469 "num_base_bdevs_operational": 4, 00:18:29.469 "base_bdevs_list": [ 00:18:29.469 { 00:18:29.469 "name": "BaseBdev1", 00:18:29.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.469 "is_configured": false, 00:18:29.469 "data_offset": 0, 00:18:29.469 "data_size": 0 00:18:29.469 }, 00:18:29.469 { 00:18:29.469 "name": "BaseBdev2", 00:18:29.469 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:29.469 "is_configured": true, 00:18:29.469 "data_offset": 0, 00:18:29.469 "data_size": 65536 00:18:29.469 }, 00:18:29.469 { 00:18:29.469 "name": "BaseBdev3", 00:18:29.469 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:29.469 "is_configured": true, 00:18:29.469 "data_offset": 0, 00:18:29.469 "data_size": 65536 00:18:29.469 }, 00:18:29.469 { 00:18:29.469 "name": "BaseBdev4", 00:18:29.469 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:29.469 "is_configured": true, 00:18:29.469 "data_offset": 0, 00:18:29.469 "data_size": 65536 00:18:29.469 } 00:18:29.469 ] 00:18:29.469 }' 00:18:29.469 16:26:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.469 16:26:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- 
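The `verify_raid_bdev_state` steps traced above repeatedly apply `select(.name == "Existed_Raid")` (bdev_raid.sh@113) to the `bdev_raid_get_bdevs all` output. The same selection on a hypothetical, abbreviated payload (requires `jq`):

```shell
# Hypothetical, abbreviated stand-in for `rpc_cmd bdev_raid_get_bdevs all` output.
json='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":3},
  {"name":"OtherRaid","state":"online"}]'

# Same filter as bdev_raid.sh@113: pick the raid bdev under test by name.
echo "$json" | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# prints: configuring
```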
common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.727 [2024-10-08 16:26:23.023873] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.727 16:26:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.986 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.986 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.986 "name": "Existed_Raid", 00:18:29.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.986 "strip_size_kb": 64, 00:18:29.986 "state": "configuring", 00:18:29.986 "raid_level": "raid5f", 00:18:29.986 "superblock": false, 00:18:29.986 "num_base_bdevs": 4, 00:18:29.986 "num_base_bdevs_discovered": 2, 00:18:29.986 "num_base_bdevs_operational": 4, 00:18:29.986 "base_bdevs_list": [ 00:18:29.986 { 00:18:29.986 "name": "BaseBdev1", 00:18:29.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.986 "is_configured": false, 00:18:29.986 "data_offset": 0, 00:18:29.986 "data_size": 0 00:18:29.986 }, 00:18:29.986 { 00:18:29.986 "name": null, 00:18:29.986 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:29.986 "is_configured": false, 00:18:29.986 "data_offset": 0, 00:18:29.986 "data_size": 65536 00:18:29.986 }, 00:18:29.986 { 00:18:29.986 "name": "BaseBdev3", 00:18:29.986 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:29.986 "is_configured": true, 00:18:29.986 "data_offset": 0, 00:18:29.986 "data_size": 65536 00:18:29.986 }, 00:18:29.986 { 00:18:29.986 "name": "BaseBdev4", 00:18:29.986 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:29.986 "is_configured": true, 00:18:29.986 "data_offset": 0, 00:18:29.986 "data_size": 65536 00:18:29.986 } 00:18:29.986 ] 00:18:29.986 }' 00:18:29.986 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.986 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.266 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.266 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:30.266 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.266 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.537 [2024-10-08 16:26:23.644506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.537 BaseBdev1 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.537 [ 00:18:30.537 { 00:18:30.537 "name": "BaseBdev1", 00:18:30.537 "aliases": [ 00:18:30.537 "c9c0a570-d083-4e0c-953b-1983746a8979" 00:18:30.537 ], 00:18:30.537 "product_name": "Malloc disk", 00:18:30.537 "block_size": 512, 00:18:30.537 "num_blocks": 65536, 00:18:30.537 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:30.537 "assigned_rate_limits": { 00:18:30.537 "rw_ios_per_sec": 0, 00:18:30.537 "rw_mbytes_per_sec": 0, 00:18:30.537 "r_mbytes_per_sec": 0, 00:18:30.537 "w_mbytes_per_sec": 0 00:18:30.537 }, 00:18:30.537 "claimed": true, 00:18:30.537 "claim_type": "exclusive_write", 00:18:30.537 "zoned": false, 00:18:30.537 "supported_io_types": { 00:18:30.537 "read": true, 00:18:30.537 "write": true, 00:18:30.537 "unmap": true, 00:18:30.537 "flush": true, 00:18:30.537 "reset": true, 00:18:30.537 "nvme_admin": false, 00:18:30.537 "nvme_io": false, 00:18:30.537 "nvme_io_md": false, 00:18:30.537 "write_zeroes": true, 00:18:30.537 "zcopy": true, 00:18:30.537 "get_zone_info": false, 00:18:30.537 "zone_management": false, 00:18:30.537 "zone_append": false, 00:18:30.537 "compare": false, 00:18:30.537 "compare_and_write": false, 00:18:30.537 "abort": true, 00:18:30.537 "seek_hole": false, 00:18:30.537 "seek_data": false, 00:18:30.537 "copy": true, 00:18:30.537 "nvme_iov_md": false 00:18:30.537 }, 00:18:30.537 "memory_domains": [ 00:18:30.537 { 00:18:30.537 "dma_device_id": "system", 
00:18:30.537 "dma_device_type": 1 00:18:30.537 }, 00:18:30.537 { 00:18:30.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.537 "dma_device_type": 2 00:18:30.537 } 00:18:30.537 ], 00:18:30.537 "driver_specific": {} 00:18:30.537 } 00:18:30.537 ] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.537 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.538 "name": "Existed_Raid", 00:18:30.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.538 "strip_size_kb": 64, 00:18:30.538 "state": "configuring", 00:18:30.538 "raid_level": "raid5f", 00:18:30.538 "superblock": false, 00:18:30.538 "num_base_bdevs": 4, 00:18:30.538 "num_base_bdevs_discovered": 3, 00:18:30.538 "num_base_bdevs_operational": 4, 00:18:30.538 "base_bdevs_list": [ 00:18:30.538 { 00:18:30.538 "name": "BaseBdev1", 00:18:30.538 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:30.538 "is_configured": true, 00:18:30.538 "data_offset": 0, 00:18:30.538 "data_size": 65536 00:18:30.538 }, 00:18:30.538 { 00:18:30.538 "name": null, 00:18:30.538 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:30.538 "is_configured": false, 00:18:30.538 "data_offset": 0, 00:18:30.538 "data_size": 65536 00:18:30.538 }, 00:18:30.538 { 00:18:30.538 "name": "BaseBdev3", 00:18:30.538 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:30.538 "is_configured": true, 00:18:30.538 "data_offset": 0, 00:18:30.538 "data_size": 65536 00:18:30.538 }, 00:18:30.538 { 00:18:30.538 "name": "BaseBdev4", 00:18:30.538 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:30.538 "is_configured": true, 00:18:30.538 "data_offset": 0, 00:18:30.538 "data_size": 65536 00:18:30.538 } 00:18:30.538 ] 00:18:30.538 }' 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.538 16:26:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.105 
16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.105 [2024-10-08 16:26:24.264818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.105 16:26:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.105 "name": "Existed_Raid", 00:18:31.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.105 "strip_size_kb": 64, 00:18:31.105 "state": "configuring", 00:18:31.105 "raid_level": "raid5f", 00:18:31.105 "superblock": false, 00:18:31.105 "num_base_bdevs": 4, 00:18:31.105 "num_base_bdevs_discovered": 2, 00:18:31.105 "num_base_bdevs_operational": 4, 00:18:31.105 "base_bdevs_list": [ 00:18:31.105 { 00:18:31.105 "name": "BaseBdev1", 00:18:31.105 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:31.105 "is_configured": true, 00:18:31.105 "data_offset": 0, 00:18:31.105 "data_size": 65536 00:18:31.105 }, 00:18:31.105 { 00:18:31.105 "name": null, 00:18:31.105 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:31.105 "is_configured": false, 00:18:31.105 "data_offset": 0, 00:18:31.105 "data_size": 65536 00:18:31.105 }, 00:18:31.105 { 00:18:31.105 "name": null, 00:18:31.105 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:31.105 "is_configured": false, 00:18:31.105 
"data_offset": 0, 00:18:31.105 "data_size": 65536 00:18:31.105 }, 00:18:31.105 { 00:18:31.105 "name": "BaseBdev4", 00:18:31.105 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:31.105 "is_configured": true, 00:18:31.105 "data_offset": 0, 00:18:31.105 "data_size": 65536 00:18:31.105 } 00:18:31.105 ] 00:18:31.105 }' 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.105 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 [2024-10-08 16:26:24.841072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:31.670 16:26:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.670 "name": "Existed_Raid", 00:18:31.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.670 "strip_size_kb": 64, 00:18:31.670 "state": "configuring", 00:18:31.670 "raid_level": "raid5f", 00:18:31.670 "superblock": false, 00:18:31.670 "num_base_bdevs": 4, 00:18:31.670 
"num_base_bdevs_discovered": 3, 00:18:31.670 "num_base_bdevs_operational": 4, 00:18:31.670 "base_bdevs_list": [ 00:18:31.670 { 00:18:31.670 "name": "BaseBdev1", 00:18:31.670 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:31.670 "is_configured": true, 00:18:31.670 "data_offset": 0, 00:18:31.670 "data_size": 65536 00:18:31.670 }, 00:18:31.670 { 00:18:31.670 "name": null, 00:18:31.670 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:31.670 "is_configured": false, 00:18:31.670 "data_offset": 0, 00:18:31.670 "data_size": 65536 00:18:31.670 }, 00:18:31.670 { 00:18:31.670 "name": "BaseBdev3", 00:18:31.670 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:31.670 "is_configured": true, 00:18:31.670 "data_offset": 0, 00:18:31.670 "data_size": 65536 00:18:31.670 }, 00:18:31.670 { 00:18:31.670 "name": "BaseBdev4", 00:18:31.670 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:31.670 "is_configured": true, 00:18:31.670 "data_offset": 0, 00:18:31.670 "data_size": 65536 00:18:31.670 } 00:18:31.670 ] 00:18:31.670 }' 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.670 16:26:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.237 [2024-10-08 16:26:25.425242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.237 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.496 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.496 "name": "Existed_Raid", 00:18:32.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.496 "strip_size_kb": 64, 00:18:32.496 "state": "configuring", 00:18:32.496 "raid_level": "raid5f", 00:18:32.496 "superblock": false, 00:18:32.496 "num_base_bdevs": 4, 00:18:32.496 "num_base_bdevs_discovered": 2, 00:18:32.496 "num_base_bdevs_operational": 4, 00:18:32.496 "base_bdevs_list": [ 00:18:32.496 { 00:18:32.496 "name": null, 00:18:32.496 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:32.496 "is_configured": false, 00:18:32.496 "data_offset": 0, 00:18:32.496 "data_size": 65536 00:18:32.496 }, 00:18:32.496 { 00:18:32.496 "name": null, 00:18:32.496 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:32.496 "is_configured": false, 00:18:32.496 "data_offset": 0, 00:18:32.496 "data_size": 65536 00:18:32.496 }, 00:18:32.496 { 00:18:32.496 "name": "BaseBdev3", 00:18:32.496 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:32.496 "is_configured": true, 00:18:32.496 "data_offset": 0, 00:18:32.496 "data_size": 65536 00:18:32.496 }, 00:18:32.496 { 00:18:32.496 "name": "BaseBdev4", 00:18:32.496 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:32.496 "is_configured": true, 00:18:32.496 "data_offset": 0, 00:18:32.496 "data_size": 65536 00:18:32.496 } 00:18:32.496 ] 00:18:32.496 }' 00:18:32.496 16:26:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.496 16:26:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.755 [2024-10-08 16:26:26.068166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.755 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.013 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.013 "name": "Existed_Raid", 00:18:33.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.013 "strip_size_kb": 64, 00:18:33.013 "state": "configuring", 00:18:33.013 "raid_level": "raid5f", 00:18:33.013 "superblock": false, 00:18:33.013 "num_base_bdevs": 4, 00:18:33.013 "num_base_bdevs_discovered": 3, 00:18:33.013 "num_base_bdevs_operational": 4, 00:18:33.013 "base_bdevs_list": [ 00:18:33.013 { 00:18:33.013 "name": null, 00:18:33.013 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:33.013 "is_configured": false, 00:18:33.013 "data_offset": 0, 00:18:33.013 "data_size": 65536 00:18:33.013 }, 00:18:33.013 { 00:18:33.013 "name": "BaseBdev2", 00:18:33.013 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:33.013 "is_configured": true, 00:18:33.013 "data_offset": 0, 00:18:33.013 "data_size": 65536 00:18:33.013 }, 00:18:33.013 { 00:18:33.013 "name": "BaseBdev3", 00:18:33.013 "uuid": 
"057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:33.013 "is_configured": true, 00:18:33.013 "data_offset": 0, 00:18:33.013 "data_size": 65536 00:18:33.013 }, 00:18:33.013 { 00:18:33.013 "name": "BaseBdev4", 00:18:33.013 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:33.013 "is_configured": true, 00:18:33.013 "data_offset": 0, 00:18:33.013 "data_size": 65536 00:18:33.013 } 00:18:33.013 ] 00:18:33.013 }' 00:18:33.014 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.014 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.271 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.271 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:33.271 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.271 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.271 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 
512 -b NewBaseBdev -u c9c0a570-d083-4e0c-953b-1983746a8979 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.531 [2024-10-08 16:26:26.702588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:33.531 [2024-10-08 16:26:26.702657] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:33.531 [2024-10-08 16:26:26.702669] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:33.531 [2024-10-08 16:26:26.703027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:33.531 [2024-10-08 16:26:26.709583] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:33.531 [2024-10-08 16:26:26.709614] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:33.531 [2024-10-08 16:26:26.709922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.531 NewBaseBdev 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.531 [ 00:18:33.531 { 00:18:33.531 "name": "NewBaseBdev", 00:18:33.531 "aliases": [ 00:18:33.531 "c9c0a570-d083-4e0c-953b-1983746a8979" 00:18:33.531 ], 00:18:33.531 "product_name": "Malloc disk", 00:18:33.531 "block_size": 512, 00:18:33.531 "num_blocks": 65536, 00:18:33.531 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:33.531 "assigned_rate_limits": { 00:18:33.531 "rw_ios_per_sec": 0, 00:18:33.531 "rw_mbytes_per_sec": 0, 00:18:33.531 "r_mbytes_per_sec": 0, 00:18:33.531 "w_mbytes_per_sec": 0 00:18:33.531 }, 00:18:33.531 "claimed": true, 00:18:33.531 "claim_type": "exclusive_write", 00:18:33.531 "zoned": false, 00:18:33.531 "supported_io_types": { 00:18:33.531 "read": true, 00:18:33.531 "write": true, 00:18:33.531 "unmap": true, 00:18:33.531 "flush": true, 00:18:33.531 "reset": true, 00:18:33.531 "nvme_admin": false, 00:18:33.531 "nvme_io": false, 00:18:33.531 "nvme_io_md": false, 00:18:33.531 "write_zeroes": true, 00:18:33.531 "zcopy": true, 00:18:33.531 "get_zone_info": false, 00:18:33.531 "zone_management": false, 00:18:33.531 "zone_append": false, 00:18:33.531 "compare": false, 00:18:33.531 "compare_and_write": false, 00:18:33.531 "abort": true, 
00:18:33.531 "seek_hole": false, 00:18:33.531 "seek_data": false, 00:18:33.531 "copy": true, 00:18:33.531 "nvme_iov_md": false 00:18:33.531 }, 00:18:33.531 "memory_domains": [ 00:18:33.531 { 00:18:33.531 "dma_device_id": "system", 00:18:33.531 "dma_device_type": 1 00:18:33.531 }, 00:18:33.531 { 00:18:33.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.531 "dma_device_type": 2 00:18:33.531 } 00:18:33.531 ], 00:18:33.531 "driver_specific": {} 00:18:33.531 } 00:18:33.531 ] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:33.531 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.532 "name": "Existed_Raid", 00:18:33.532 "uuid": "10f3ff0c-0e57-4d5b-8619-7a30b6db732c", 00:18:33.532 "strip_size_kb": 64, 00:18:33.532 "state": "online", 00:18:33.532 "raid_level": "raid5f", 00:18:33.532 "superblock": false, 00:18:33.532 "num_base_bdevs": 4, 00:18:33.532 "num_base_bdevs_discovered": 4, 00:18:33.532 "num_base_bdevs_operational": 4, 00:18:33.532 "base_bdevs_list": [ 00:18:33.532 { 00:18:33.532 "name": "NewBaseBdev", 00:18:33.532 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:33.532 "is_configured": true, 00:18:33.532 "data_offset": 0, 00:18:33.532 "data_size": 65536 00:18:33.532 }, 00:18:33.532 { 00:18:33.532 "name": "BaseBdev2", 00:18:33.532 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:33.532 "is_configured": true, 00:18:33.532 "data_offset": 0, 00:18:33.532 "data_size": 65536 00:18:33.532 }, 00:18:33.532 { 00:18:33.532 "name": "BaseBdev3", 00:18:33.532 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:33.532 "is_configured": true, 00:18:33.532 "data_offset": 0, 00:18:33.532 "data_size": 65536 00:18:33.532 }, 00:18:33.532 { 00:18:33.532 "name": "BaseBdev4", 00:18:33.532 "uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:33.532 "is_configured": true, 00:18:33.532 "data_offset": 0, 00:18:33.532 "data_size": 65536 00:18:33.532 } 00:18:33.532 ] 00:18:33.532 }' 00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:33.532 16:26:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.100 [2024-10-08 16:26:27.269496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.100 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.100 "name": "Existed_Raid", 00:18:34.100 "aliases": [ 00:18:34.100 "10f3ff0c-0e57-4d5b-8619-7a30b6db732c" 00:18:34.100 ], 00:18:34.100 "product_name": "Raid Volume", 00:18:34.100 "block_size": 512, 00:18:34.100 "num_blocks": 196608, 00:18:34.100 "uuid": "10f3ff0c-0e57-4d5b-8619-7a30b6db732c", 00:18:34.100 "assigned_rate_limits": { 00:18:34.100 "rw_ios_per_sec": 0, 00:18:34.100 "rw_mbytes_per_sec": 0, 
00:18:34.100 "r_mbytes_per_sec": 0, 00:18:34.100 "w_mbytes_per_sec": 0 00:18:34.100 }, 00:18:34.100 "claimed": false, 00:18:34.100 "zoned": false, 00:18:34.100 "supported_io_types": { 00:18:34.100 "read": true, 00:18:34.100 "write": true, 00:18:34.100 "unmap": false, 00:18:34.100 "flush": false, 00:18:34.100 "reset": true, 00:18:34.100 "nvme_admin": false, 00:18:34.100 "nvme_io": false, 00:18:34.100 "nvme_io_md": false, 00:18:34.100 "write_zeroes": true, 00:18:34.100 "zcopy": false, 00:18:34.100 "get_zone_info": false, 00:18:34.100 "zone_management": false, 00:18:34.100 "zone_append": false, 00:18:34.100 "compare": false, 00:18:34.100 "compare_and_write": false, 00:18:34.100 "abort": false, 00:18:34.100 "seek_hole": false, 00:18:34.100 "seek_data": false, 00:18:34.100 "copy": false, 00:18:34.100 "nvme_iov_md": false 00:18:34.100 }, 00:18:34.100 "driver_specific": { 00:18:34.100 "raid": { 00:18:34.100 "uuid": "10f3ff0c-0e57-4d5b-8619-7a30b6db732c", 00:18:34.100 "strip_size_kb": 64, 00:18:34.100 "state": "online", 00:18:34.100 "raid_level": "raid5f", 00:18:34.100 "superblock": false, 00:18:34.100 "num_base_bdevs": 4, 00:18:34.100 "num_base_bdevs_discovered": 4, 00:18:34.100 "num_base_bdevs_operational": 4, 00:18:34.100 "base_bdevs_list": [ 00:18:34.100 { 00:18:34.100 "name": "NewBaseBdev", 00:18:34.100 "uuid": "c9c0a570-d083-4e0c-953b-1983746a8979", 00:18:34.100 "is_configured": true, 00:18:34.100 "data_offset": 0, 00:18:34.100 "data_size": 65536 00:18:34.100 }, 00:18:34.100 { 00:18:34.100 "name": "BaseBdev2", 00:18:34.100 "uuid": "732faedf-299c-4d73-84e6-edde40d32e1c", 00:18:34.100 "is_configured": true, 00:18:34.100 "data_offset": 0, 00:18:34.100 "data_size": 65536 00:18:34.100 }, 00:18:34.100 { 00:18:34.100 "name": "BaseBdev3", 00:18:34.100 "uuid": "057ee86f-504f-4ff2-b62d-ed7e5b76a243", 00:18:34.100 "is_configured": true, 00:18:34.100 "data_offset": 0, 00:18:34.100 "data_size": 65536 00:18:34.100 }, 00:18:34.100 { 00:18:34.100 "name": "BaseBdev4", 00:18:34.101 
"uuid": "70af04ed-4dae-4bfb-9c6a-9d2f0900dccd", 00:18:34.101 "is_configured": true, 00:18:34.101 "data_offset": 0, 00:18:34.101 "data_size": 65536 00:18:34.101 } 00:18:34.101 ] 00:18:34.101 } 00:18:34.101 } 00:18:34.101 }' 00:18:34.101 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.101 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:34.101 BaseBdev2 00:18:34.101 BaseBdev3 00:18:34.101 BaseBdev4' 00:18:34.101 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.359 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:34.360 
16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.360 [2024-10-08 16:26:27.645247] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.360 [2024-10-08 16:26:27.645290] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.360 [2024-10-08 16:26:27.645385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.360 [2024-10-08 16:26:27.645793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.360 [2024-10-08 16:26:27.645812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83533 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@950 -- # '[' -z 83533 ']' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83533 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.360 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83533 00:18:34.618 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.618 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.618 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83533' 00:18:34.618 killing process with pid 83533 00:18:34.618 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83533 00:18:34.618 [2024-10-08 16:26:27.692140] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.618 16:26:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83533 00:18:34.876 [2024-10-08 16:26:28.028623] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.247 16:26:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:36.247 ************************************ 00:18:36.247 END TEST raid5f_state_function_test 00:18:36.247 ************************************ 00:18:36.247 00:18:36.247 real 0m13.254s 00:18:36.247 user 0m21.771s 00:18:36.247 sys 0m1.930s 00:18:36.247 16:26:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.247 16:26:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.247 16:26:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test 
raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:36.247 16:26:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:36.247 16:26:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.247 16:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.247 ************************************ 00:18:36.247 START TEST raid5f_state_function_test_sb 00:18:36.247 ************************************ 00:18:36.247 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.248 16:26:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:36.248 Process raid pid: 84217 00:18:36.248 
16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84217 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84217' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84217 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84217 ']' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.248 16:26:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.248 [2024-10-08 16:26:29.472458] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:18:36.248 [2024-10-08 16:26:29.472666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.506 [2024-10-08 16:26:29.645828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.765 [2024-10-08 16:26:29.950317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.022 [2024-10-08 16:26:30.206470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.022 [2024-10-08 16:26:30.206533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.279 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.279 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:37.279 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.279 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.279 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.279 [2024-10-08 16:26:30.594904] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.279 [2024-10-08 16:26:30.595333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.279 [2024-10-08 16:26:30.595363] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.279 [2024-10-08 16:26:30.595380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.279 [2024-10-08 16:26:30.595390] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:37.279 [2024-10-08 16:26:30.595403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:37.279 [2024-10-08 16:26:30.595412] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:37.279 [2024-10-08 16:26:30.595425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.537 "name": "Existed_Raid", 00:18:37.537 "uuid": "3a40c392-b453-473b-8834-8a4d823d89d3", 00:18:37.537 "strip_size_kb": 64, 00:18:37.537 "state": "configuring", 00:18:37.537 "raid_level": "raid5f", 00:18:37.537 "superblock": true, 00:18:37.537 "num_base_bdevs": 4, 00:18:37.537 "num_base_bdevs_discovered": 0, 00:18:37.537 "num_base_bdevs_operational": 4, 00:18:37.537 "base_bdevs_list": [ 00:18:37.537 { 00:18:37.537 "name": "BaseBdev1", 00:18:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.537 "is_configured": false, 00:18:37.537 "data_offset": 0, 00:18:37.537 "data_size": 0 00:18:37.537 }, 00:18:37.537 { 00:18:37.537 "name": "BaseBdev2", 00:18:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.537 "is_configured": false, 00:18:37.537 "data_offset": 0, 00:18:37.537 "data_size": 0 00:18:37.537 }, 00:18:37.537 { 00:18:37.537 "name": "BaseBdev3", 00:18:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.537 "is_configured": false, 00:18:37.537 "data_offset": 0, 00:18:37.537 "data_size": 0 00:18:37.537 }, 00:18:37.537 { 00:18:37.537 "name": "BaseBdev4", 00:18:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.537 "is_configured": false, 00:18:37.537 "data_offset": 0, 00:18:37.537 "data_size": 0 00:18:37.537 } 00:18:37.537 ] 00:18:37.537 }' 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.537 16:26:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.794 [2024-10-08 16:26:31.110876] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.794 [2024-10-08 16:26:31.110962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.794 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.052 [2024-10-08 16:26:31.118888] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.052 [2024-10-08 16:26:31.118963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.052 [2024-10-08 16:26:31.118977] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.052 [2024-10-08 16:26:31.118991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.052 [2024-10-08 16:26:31.118999] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.052 [2024-10-08 16:26:31.119012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.052 [2024-10-08 16:26:31.119021] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:38.052 [2024-10-08 16:26:31.119034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.052 [2024-10-08 16:26:31.169868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.052 BaseBdev1 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.052 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.052 [ 00:18:38.052 { 00:18:38.052 "name": "BaseBdev1", 00:18:38.052 "aliases": [ 00:18:38.052 "98ec939f-58f6-4817-a685-519b9ee9addb" 00:18:38.052 ], 00:18:38.052 "product_name": "Malloc disk", 00:18:38.052 "block_size": 512, 00:18:38.052 "num_blocks": 65536, 00:18:38.052 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:38.052 "assigned_rate_limits": { 00:18:38.052 "rw_ios_per_sec": 0, 00:18:38.052 "rw_mbytes_per_sec": 0, 00:18:38.052 "r_mbytes_per_sec": 0, 00:18:38.052 "w_mbytes_per_sec": 0 00:18:38.052 }, 00:18:38.052 "claimed": true, 00:18:38.052 "claim_type": "exclusive_write", 00:18:38.052 "zoned": false, 00:18:38.052 "supported_io_types": { 00:18:38.052 "read": true, 00:18:38.052 "write": true, 00:18:38.052 "unmap": true, 00:18:38.052 "flush": true, 00:18:38.052 "reset": true, 00:18:38.052 "nvme_admin": false, 00:18:38.052 "nvme_io": false, 00:18:38.052 "nvme_io_md": false, 00:18:38.052 "write_zeroes": true, 00:18:38.052 "zcopy": true, 00:18:38.052 "get_zone_info": false, 00:18:38.052 "zone_management": false, 00:18:38.052 "zone_append": false, 00:18:38.052 "compare": false, 00:18:38.052 "compare_and_write": false, 00:18:38.052 "abort": true, 00:18:38.052 "seek_hole": false, 00:18:38.052 "seek_data": false, 00:18:38.052 "copy": true, 00:18:38.053 "nvme_iov_md": false 00:18:38.053 }, 00:18:38.053 "memory_domains": [ 00:18:38.053 { 00:18:38.053 "dma_device_id": "system", 00:18:38.053 "dma_device_type": 1 00:18:38.053 }, 00:18:38.053 { 00:18:38.053 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:38.053 "dma_device_type": 2 00:18:38.053 } 00:18:38.053 ], 00:18:38.053 "driver_specific": {} 00:18:38.053 } 00:18:38.053 ] 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.053 16:26:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.053 "name": "Existed_Raid", 00:18:38.053 "uuid": "fd5bbf34-66ec-4287-8306-954a9b5ce747", 00:18:38.053 "strip_size_kb": 64, 00:18:38.053 "state": "configuring", 00:18:38.053 "raid_level": "raid5f", 00:18:38.053 "superblock": true, 00:18:38.053 "num_base_bdevs": 4, 00:18:38.053 "num_base_bdevs_discovered": 1, 00:18:38.053 "num_base_bdevs_operational": 4, 00:18:38.053 "base_bdevs_list": [ 00:18:38.053 { 00:18:38.053 "name": "BaseBdev1", 00:18:38.053 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:38.053 "is_configured": true, 00:18:38.053 "data_offset": 2048, 00:18:38.053 "data_size": 63488 00:18:38.053 }, 00:18:38.053 { 00:18:38.053 "name": "BaseBdev2", 00:18:38.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.053 "is_configured": false, 00:18:38.053 "data_offset": 0, 00:18:38.053 "data_size": 0 00:18:38.053 }, 00:18:38.053 { 00:18:38.053 "name": "BaseBdev3", 00:18:38.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.053 "is_configured": false, 00:18:38.053 "data_offset": 0, 00:18:38.053 "data_size": 0 00:18:38.053 }, 00:18:38.053 { 00:18:38.053 "name": "BaseBdev4", 00:18:38.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.053 "is_configured": false, 00:18:38.053 "data_offset": 0, 00:18:38.053 "data_size": 0 00:18:38.053 } 00:18:38.053 ] 00:18:38.053 }' 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.053 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.620 16:26:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.620 [2024-10-08 16:26:31.726109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.620 [2024-10-08 16:26:31.726182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.620 [2024-10-08 16:26:31.734151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.620 [2024-10-08 16:26:31.736899] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.620 [2024-10-08 16:26:31.737124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.620 [2024-10-08 16:26:31.737250] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.620 [2024-10-08 16:26:31.737409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.620 [2024-10-08 16:26:31.737560] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:38.620 [2024-10-08 16:26:31.737707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.620 16:26:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.620 "name": "Existed_Raid", 00:18:38.620 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:38.620 "strip_size_kb": 64, 00:18:38.620 "state": "configuring", 00:18:38.620 "raid_level": "raid5f", 00:18:38.620 "superblock": true, 00:18:38.620 "num_base_bdevs": 4, 00:18:38.620 "num_base_bdevs_discovered": 1, 00:18:38.620 "num_base_bdevs_operational": 4, 00:18:38.620 "base_bdevs_list": [ 00:18:38.620 { 00:18:38.620 "name": "BaseBdev1", 00:18:38.620 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:38.620 "is_configured": true, 00:18:38.620 "data_offset": 2048, 00:18:38.620 "data_size": 63488 00:18:38.620 }, 00:18:38.620 { 00:18:38.620 "name": "BaseBdev2", 00:18:38.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.620 "is_configured": false, 00:18:38.620 "data_offset": 0, 00:18:38.620 "data_size": 0 00:18:38.620 }, 00:18:38.620 { 00:18:38.620 "name": "BaseBdev3", 00:18:38.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.620 "is_configured": false, 00:18:38.620 "data_offset": 0, 00:18:38.620 "data_size": 0 00:18:38.620 }, 00:18:38.620 { 00:18:38.620 "name": "BaseBdev4", 00:18:38.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.620 "is_configured": false, 00:18:38.620 "data_offset": 0, 00:18:38.620 "data_size": 0 00:18:38.620 } 00:18:38.620 ] 00:18:38.620 }' 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.620 16:26:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.188 [2024-10-08 16:26:32.293284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.188 BaseBdev2 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.188 [ 00:18:39.188 { 00:18:39.188 "name": "BaseBdev2", 00:18:39.188 "aliases": [ 00:18:39.188 
"3ee03d3e-b451-47df-8426-27ee0f47926f" 00:18:39.188 ], 00:18:39.188 "product_name": "Malloc disk", 00:18:39.188 "block_size": 512, 00:18:39.188 "num_blocks": 65536, 00:18:39.188 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:39.188 "assigned_rate_limits": { 00:18:39.188 "rw_ios_per_sec": 0, 00:18:39.188 "rw_mbytes_per_sec": 0, 00:18:39.188 "r_mbytes_per_sec": 0, 00:18:39.188 "w_mbytes_per_sec": 0 00:18:39.188 }, 00:18:39.188 "claimed": true, 00:18:39.188 "claim_type": "exclusive_write", 00:18:39.188 "zoned": false, 00:18:39.188 "supported_io_types": { 00:18:39.188 "read": true, 00:18:39.188 "write": true, 00:18:39.188 "unmap": true, 00:18:39.188 "flush": true, 00:18:39.188 "reset": true, 00:18:39.188 "nvme_admin": false, 00:18:39.188 "nvme_io": false, 00:18:39.188 "nvme_io_md": false, 00:18:39.188 "write_zeroes": true, 00:18:39.188 "zcopy": true, 00:18:39.188 "get_zone_info": false, 00:18:39.188 "zone_management": false, 00:18:39.188 "zone_append": false, 00:18:39.188 "compare": false, 00:18:39.188 "compare_and_write": false, 00:18:39.188 "abort": true, 00:18:39.188 "seek_hole": false, 00:18:39.188 "seek_data": false, 00:18:39.188 "copy": true, 00:18:39.188 "nvme_iov_md": false 00:18:39.188 }, 00:18:39.188 "memory_domains": [ 00:18:39.188 { 00:18:39.188 "dma_device_id": "system", 00:18:39.188 "dma_device_type": 1 00:18:39.188 }, 00:18:39.188 { 00:18:39.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.188 "dma_device_type": 2 00:18:39.188 } 00:18:39.188 ], 00:18:39.188 "driver_specific": {} 00:18:39.188 } 00:18:39.188 ] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.188 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.188 "name": "Existed_Raid", 00:18:39.188 "uuid": 
"0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:39.188 "strip_size_kb": 64, 00:18:39.188 "state": "configuring", 00:18:39.188 "raid_level": "raid5f", 00:18:39.188 "superblock": true, 00:18:39.188 "num_base_bdevs": 4, 00:18:39.188 "num_base_bdevs_discovered": 2, 00:18:39.188 "num_base_bdevs_operational": 4, 00:18:39.188 "base_bdevs_list": [ 00:18:39.188 { 00:18:39.188 "name": "BaseBdev1", 00:18:39.188 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:39.188 "is_configured": true, 00:18:39.188 "data_offset": 2048, 00:18:39.188 "data_size": 63488 00:18:39.188 }, 00:18:39.188 { 00:18:39.189 "name": "BaseBdev2", 00:18:39.189 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:39.189 "is_configured": true, 00:18:39.189 "data_offset": 2048, 00:18:39.189 "data_size": 63488 00:18:39.189 }, 00:18:39.189 { 00:18:39.189 "name": "BaseBdev3", 00:18:39.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.189 "is_configured": false, 00:18:39.189 "data_offset": 0, 00:18:39.189 "data_size": 0 00:18:39.189 }, 00:18:39.189 { 00:18:39.189 "name": "BaseBdev4", 00:18:39.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.189 "is_configured": false, 00:18:39.189 "data_offset": 0, 00:18:39.189 "data_size": 0 00:18:39.189 } 00:18:39.189 ] 00:18:39.189 }' 00:18:39.189 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.189 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.755 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:39.755 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.755 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 [2024-10-08 16:26:32.899402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.756 BaseBdev3 
00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 [ 00:18:39.756 { 00:18:39.756 "name": "BaseBdev3", 00:18:39.756 "aliases": [ 00:18:39.756 "9554f5a4-e7c7-4add-9d69-10e5457199fa" 00:18:39.756 ], 00:18:39.756 "product_name": "Malloc disk", 00:18:39.756 "block_size": 512, 00:18:39.756 "num_blocks": 65536, 00:18:39.756 "uuid": "9554f5a4-e7c7-4add-9d69-10e5457199fa", 00:18:39.756 
"assigned_rate_limits": { 00:18:39.756 "rw_ios_per_sec": 0, 00:18:39.756 "rw_mbytes_per_sec": 0, 00:18:39.756 "r_mbytes_per_sec": 0, 00:18:39.756 "w_mbytes_per_sec": 0 00:18:39.756 }, 00:18:39.756 "claimed": true, 00:18:39.756 "claim_type": "exclusive_write", 00:18:39.756 "zoned": false, 00:18:39.756 "supported_io_types": { 00:18:39.756 "read": true, 00:18:39.756 "write": true, 00:18:39.756 "unmap": true, 00:18:39.756 "flush": true, 00:18:39.756 "reset": true, 00:18:39.756 "nvme_admin": false, 00:18:39.756 "nvme_io": false, 00:18:39.756 "nvme_io_md": false, 00:18:39.756 "write_zeroes": true, 00:18:39.756 "zcopy": true, 00:18:39.756 "get_zone_info": false, 00:18:39.756 "zone_management": false, 00:18:39.756 "zone_append": false, 00:18:39.756 "compare": false, 00:18:39.756 "compare_and_write": false, 00:18:39.756 "abort": true, 00:18:39.756 "seek_hole": false, 00:18:39.756 "seek_data": false, 00:18:39.756 "copy": true, 00:18:39.756 "nvme_iov_md": false 00:18:39.756 }, 00:18:39.756 "memory_domains": [ 00:18:39.756 { 00:18:39.756 "dma_device_id": "system", 00:18:39.756 "dma_device_type": 1 00:18:39.756 }, 00:18:39.756 { 00:18:39.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.756 "dma_device_type": 2 00:18:39.756 } 00:18:39.756 ], 00:18:39.756 "driver_specific": {} 00:18:39.756 } 00:18:39.756 ] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.756 "name": "Existed_Raid", 00:18:39.756 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:39.756 "strip_size_kb": 64, 00:18:39.756 "state": "configuring", 00:18:39.756 "raid_level": "raid5f", 00:18:39.756 "superblock": true, 00:18:39.756 "num_base_bdevs": 4, 00:18:39.756 "num_base_bdevs_discovered": 3, 
00:18:39.756 "num_base_bdevs_operational": 4, 00:18:39.756 "base_bdevs_list": [ 00:18:39.756 { 00:18:39.756 "name": "BaseBdev1", 00:18:39.756 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:39.756 "is_configured": true, 00:18:39.756 "data_offset": 2048, 00:18:39.756 "data_size": 63488 00:18:39.756 }, 00:18:39.756 { 00:18:39.756 "name": "BaseBdev2", 00:18:39.756 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:39.756 "is_configured": true, 00:18:39.756 "data_offset": 2048, 00:18:39.756 "data_size": 63488 00:18:39.756 }, 00:18:39.756 { 00:18:39.756 "name": "BaseBdev3", 00:18:39.756 "uuid": "9554f5a4-e7c7-4add-9d69-10e5457199fa", 00:18:39.756 "is_configured": true, 00:18:39.756 "data_offset": 2048, 00:18:39.756 "data_size": 63488 00:18:39.756 }, 00:18:39.756 { 00:18:39.756 "name": "BaseBdev4", 00:18:39.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.756 "is_configured": false, 00:18:39.756 "data_offset": 0, 00:18:39.756 "data_size": 0 00:18:39.756 } 00:18:39.756 ] 00:18:39.756 }' 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.756 16:26:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.322 [2024-10-08 16:26:33.501418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.322 [2024-10-08 16:26:33.501838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:40.322 [2024-10-08 16:26:33.501858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:40.322 [2024-10-08 
16:26:33.502208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.322 BaseBdev4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.322 [2024-10-08 16:26:33.509220] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:40.322 [2024-10-08 16:26:33.509419] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:40.322 [2024-10-08 16:26:33.509812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.322 16:26:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.322 [ 00:18:40.322 { 00:18:40.322 "name": "BaseBdev4", 00:18:40.322 "aliases": [ 00:18:40.322 "fa6df70e-9054-4860-892f-37cc27320e58" 00:18:40.322 ], 00:18:40.322 "product_name": "Malloc disk", 00:18:40.322 "block_size": 512, 00:18:40.322 "num_blocks": 65536, 00:18:40.322 "uuid": "fa6df70e-9054-4860-892f-37cc27320e58", 00:18:40.322 "assigned_rate_limits": { 00:18:40.322 "rw_ios_per_sec": 0, 00:18:40.322 "rw_mbytes_per_sec": 0, 00:18:40.322 "r_mbytes_per_sec": 0, 00:18:40.322 "w_mbytes_per_sec": 0 00:18:40.322 }, 00:18:40.322 "claimed": true, 00:18:40.322 "claim_type": "exclusive_write", 00:18:40.322 "zoned": false, 00:18:40.322 "supported_io_types": { 00:18:40.322 "read": true, 00:18:40.322 "write": true, 00:18:40.322 "unmap": true, 00:18:40.322 "flush": true, 00:18:40.322 "reset": true, 00:18:40.322 "nvme_admin": false, 00:18:40.322 "nvme_io": false, 00:18:40.322 "nvme_io_md": false, 00:18:40.322 "write_zeroes": true, 00:18:40.322 "zcopy": true, 00:18:40.322 "get_zone_info": false, 00:18:40.322 "zone_management": false, 00:18:40.322 "zone_append": false, 00:18:40.322 "compare": false, 00:18:40.322 "compare_and_write": false, 00:18:40.322 "abort": true, 00:18:40.322 "seek_hole": false, 00:18:40.322 "seek_data": false, 00:18:40.322 "copy": true, 00:18:40.322 "nvme_iov_md": false 00:18:40.322 }, 00:18:40.322 "memory_domains": [ 00:18:40.322 { 00:18:40.322 "dma_device_id": "system", 00:18:40.322 "dma_device_type": 1 00:18:40.322 }, 00:18:40.322 { 00:18:40.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.322 "dma_device_type": 2 00:18:40.322 } 00:18:40.322 ], 00:18:40.322 "driver_specific": {} 00:18:40.322 } 00:18:40.322 ] 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.322 16:26:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.322 "name": "Existed_Raid", 00:18:40.322 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:40.322 "strip_size_kb": 64, 00:18:40.322 "state": "online", 00:18:40.322 "raid_level": "raid5f", 00:18:40.322 "superblock": true, 00:18:40.322 "num_base_bdevs": 4, 00:18:40.322 "num_base_bdevs_discovered": 4, 00:18:40.322 "num_base_bdevs_operational": 4, 00:18:40.322 "base_bdevs_list": [ 00:18:40.322 { 00:18:40.322 "name": "BaseBdev1", 00:18:40.322 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:40.322 "is_configured": true, 00:18:40.322 "data_offset": 2048, 00:18:40.322 "data_size": 63488 00:18:40.322 }, 00:18:40.322 { 00:18:40.322 "name": "BaseBdev2", 00:18:40.322 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:40.322 "is_configured": true, 00:18:40.322 "data_offset": 2048, 00:18:40.322 "data_size": 63488 00:18:40.322 }, 00:18:40.322 { 00:18:40.322 "name": "BaseBdev3", 00:18:40.322 "uuid": "9554f5a4-e7c7-4add-9d69-10e5457199fa", 00:18:40.322 "is_configured": true, 00:18:40.322 "data_offset": 2048, 00:18:40.322 "data_size": 63488 00:18:40.322 }, 00:18:40.322 { 00:18:40.322 "name": "BaseBdev4", 00:18:40.322 "uuid": "fa6df70e-9054-4860-892f-37cc27320e58", 00:18:40.322 "is_configured": true, 00:18:40.322 "data_offset": 2048, 00:18:40.322 "data_size": 63488 00:18:40.322 } 00:18:40.322 ] 00:18:40.322 }' 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.322 16:26:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.888 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.889 [2024-10-08 16:26:34.069609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.889 "name": "Existed_Raid", 00:18:40.889 "aliases": [ 00:18:40.889 "0b0c6004-c9cd-4551-8035-affcbaaf8232" 00:18:40.889 ], 00:18:40.889 "product_name": "Raid Volume", 00:18:40.889 "block_size": 512, 00:18:40.889 "num_blocks": 190464, 00:18:40.889 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:40.889 "assigned_rate_limits": { 00:18:40.889 "rw_ios_per_sec": 0, 00:18:40.889 "rw_mbytes_per_sec": 0, 00:18:40.889 "r_mbytes_per_sec": 0, 00:18:40.889 "w_mbytes_per_sec": 0 00:18:40.889 }, 00:18:40.889 "claimed": false, 00:18:40.889 "zoned": false, 00:18:40.889 "supported_io_types": { 00:18:40.889 "read": true, 00:18:40.889 "write": true, 00:18:40.889 "unmap": false, 00:18:40.889 "flush": false, 
00:18:40.889 "reset": true, 00:18:40.889 "nvme_admin": false, 00:18:40.889 "nvme_io": false, 00:18:40.889 "nvme_io_md": false, 00:18:40.889 "write_zeroes": true, 00:18:40.889 "zcopy": false, 00:18:40.889 "get_zone_info": false, 00:18:40.889 "zone_management": false, 00:18:40.889 "zone_append": false, 00:18:40.889 "compare": false, 00:18:40.889 "compare_and_write": false, 00:18:40.889 "abort": false, 00:18:40.889 "seek_hole": false, 00:18:40.889 "seek_data": false, 00:18:40.889 "copy": false, 00:18:40.889 "nvme_iov_md": false 00:18:40.889 }, 00:18:40.889 "driver_specific": { 00:18:40.889 "raid": { 00:18:40.889 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:40.889 "strip_size_kb": 64, 00:18:40.889 "state": "online", 00:18:40.889 "raid_level": "raid5f", 00:18:40.889 "superblock": true, 00:18:40.889 "num_base_bdevs": 4, 00:18:40.889 "num_base_bdevs_discovered": 4, 00:18:40.889 "num_base_bdevs_operational": 4, 00:18:40.889 "base_bdevs_list": [ 00:18:40.889 { 00:18:40.889 "name": "BaseBdev1", 00:18:40.889 "uuid": "98ec939f-58f6-4817-a685-519b9ee9addb", 00:18:40.889 "is_configured": true, 00:18:40.889 "data_offset": 2048, 00:18:40.889 "data_size": 63488 00:18:40.889 }, 00:18:40.889 { 00:18:40.889 "name": "BaseBdev2", 00:18:40.889 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:40.889 "is_configured": true, 00:18:40.889 "data_offset": 2048, 00:18:40.889 "data_size": 63488 00:18:40.889 }, 00:18:40.889 { 00:18:40.889 "name": "BaseBdev3", 00:18:40.889 "uuid": "9554f5a4-e7c7-4add-9d69-10e5457199fa", 00:18:40.889 "is_configured": true, 00:18:40.889 "data_offset": 2048, 00:18:40.889 "data_size": 63488 00:18:40.889 }, 00:18:40.889 { 00:18:40.889 "name": "BaseBdev4", 00:18:40.889 "uuid": "fa6df70e-9054-4860-892f-37cc27320e58", 00:18:40.889 "is_configured": true, 00:18:40.889 "data_offset": 2048, 00:18:40.889 "data_size": 63488 00:18:40.889 } 00:18:40.889 ] 00:18:40.889 } 00:18:40.889 } 00:18:40.889 }' 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:40.889 BaseBdev2 00:18:40.889 BaseBdev3 00:18:40.889 BaseBdev4' 00:18:40.889 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:41.147 16:26:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.147 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.148 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.148 [2024-10-08 16:26:34.445494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.406 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.406 "name": "Existed_Raid", 00:18:41.406 "uuid": "0b0c6004-c9cd-4551-8035-affcbaaf8232", 00:18:41.406 "strip_size_kb": 64, 00:18:41.406 "state": "online", 00:18:41.406 "raid_level": "raid5f", 00:18:41.406 "superblock": true, 00:18:41.406 "num_base_bdevs": 4, 00:18:41.406 "num_base_bdevs_discovered": 3, 00:18:41.406 "num_base_bdevs_operational": 3, 00:18:41.406 "base_bdevs_list": [ 00:18:41.406 { 00:18:41.406 "name": null, 00:18:41.406 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:41.406 "is_configured": false, 00:18:41.406 "data_offset": 0, 00:18:41.406 "data_size": 63488 00:18:41.406 }, 00:18:41.406 { 00:18:41.406 "name": "BaseBdev2", 00:18:41.406 "uuid": "3ee03d3e-b451-47df-8426-27ee0f47926f", 00:18:41.406 "is_configured": true, 00:18:41.406 "data_offset": 2048, 00:18:41.406 "data_size": 63488 00:18:41.406 }, 00:18:41.406 { 00:18:41.406 "name": "BaseBdev3", 00:18:41.406 "uuid": "9554f5a4-e7c7-4add-9d69-10e5457199fa", 00:18:41.406 "is_configured": true, 00:18:41.406 "data_offset": 2048, 00:18:41.406 "data_size": 63488 00:18:41.406 }, 00:18:41.406 { 00:18:41.406 "name": "BaseBdev4", 00:18:41.407 "uuid": "fa6df70e-9054-4860-892f-37cc27320e58", 00:18:41.407 "is_configured": true, 00:18:41.407 "data_offset": 2048, 00:18:41.407 "data_size": 63488 00:18:41.407 } 00:18:41.407 ] 00:18:41.407 }' 00:18:41.407 16:26:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.407 16:26:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 [2024-10-08 16:26:35.105180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:41.975 [2024-10-08 16:26:35.105757] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.975 [2024-10-08 16:26:35.189658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.975 
16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.975 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 [2024-10-08 16:26:35.249710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.260 [2024-10-08 16:26:35.394978] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:42.260 [2024-10-08 16:26:35.395051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.260 16:26:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.519 BaseBdev2 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.519 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 [ 00:18:42.520 { 00:18:42.520 "name": "BaseBdev2", 00:18:42.520 "aliases": [ 00:18:42.520 "a13d5dbb-916a-4bc4-9994-3a0bac0423d1" 00:18:42.520 ], 00:18:42.520 "product_name": "Malloc disk", 00:18:42.520 "block_size": 512, 00:18:42.520 "num_blocks": 65536, 00:18:42.520 "uuid": 
"a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:42.520 "assigned_rate_limits": { 00:18:42.520 "rw_ios_per_sec": 0, 00:18:42.520 "rw_mbytes_per_sec": 0, 00:18:42.520 "r_mbytes_per_sec": 0, 00:18:42.520 "w_mbytes_per_sec": 0 00:18:42.520 }, 00:18:42.520 "claimed": false, 00:18:42.520 "zoned": false, 00:18:42.520 "supported_io_types": { 00:18:42.520 "read": true, 00:18:42.520 "write": true, 00:18:42.520 "unmap": true, 00:18:42.520 "flush": true, 00:18:42.520 "reset": true, 00:18:42.520 "nvme_admin": false, 00:18:42.520 "nvme_io": false, 00:18:42.520 "nvme_io_md": false, 00:18:42.520 "write_zeroes": true, 00:18:42.520 "zcopy": true, 00:18:42.520 "get_zone_info": false, 00:18:42.520 "zone_management": false, 00:18:42.520 "zone_append": false, 00:18:42.520 "compare": false, 00:18:42.520 "compare_and_write": false, 00:18:42.520 "abort": true, 00:18:42.520 "seek_hole": false, 00:18:42.520 "seek_data": false, 00:18:42.520 "copy": true, 00:18:42.520 "nvme_iov_md": false 00:18:42.520 }, 00:18:42.520 "memory_domains": [ 00:18:42.520 { 00:18:42.520 "dma_device_id": "system", 00:18:42.520 "dma_device_type": 1 00:18:42.520 }, 00:18:42.520 { 00:18:42.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.520 "dma_device_type": 2 00:18:42.520 } 00:18:42.520 ], 00:18:42.520 "driver_specific": {} 00:18:42.520 } 00:18:42.520 ] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 BaseBdev3 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 [ 00:18:42.520 { 00:18:42.520 "name": "BaseBdev3", 00:18:42.520 "aliases": [ 00:18:42.520 "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5" 00:18:42.520 ], 00:18:42.520 
"product_name": "Malloc disk", 00:18:42.520 "block_size": 512, 00:18:42.520 "num_blocks": 65536, 00:18:42.520 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:42.520 "assigned_rate_limits": { 00:18:42.520 "rw_ios_per_sec": 0, 00:18:42.520 "rw_mbytes_per_sec": 0, 00:18:42.520 "r_mbytes_per_sec": 0, 00:18:42.520 "w_mbytes_per_sec": 0 00:18:42.520 }, 00:18:42.520 "claimed": false, 00:18:42.520 "zoned": false, 00:18:42.520 "supported_io_types": { 00:18:42.520 "read": true, 00:18:42.520 "write": true, 00:18:42.520 "unmap": true, 00:18:42.520 "flush": true, 00:18:42.520 "reset": true, 00:18:42.520 "nvme_admin": false, 00:18:42.520 "nvme_io": false, 00:18:42.520 "nvme_io_md": false, 00:18:42.520 "write_zeroes": true, 00:18:42.520 "zcopy": true, 00:18:42.520 "get_zone_info": false, 00:18:42.520 "zone_management": false, 00:18:42.520 "zone_append": false, 00:18:42.520 "compare": false, 00:18:42.520 "compare_and_write": false, 00:18:42.520 "abort": true, 00:18:42.520 "seek_hole": false, 00:18:42.520 "seek_data": false, 00:18:42.520 "copy": true, 00:18:42.520 "nvme_iov_md": false 00:18:42.520 }, 00:18:42.520 "memory_domains": [ 00:18:42.520 { 00:18:42.520 "dma_device_id": "system", 00:18:42.520 "dma_device_type": 1 00:18:42.520 }, 00:18:42.520 { 00:18:42.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.520 "dma_device_type": 2 00:18:42.520 } 00:18:42.520 ], 00:18:42.520 "driver_specific": {} 00:18:42.520 } 00:18:42.520 ] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 BaseBdev4 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 [ 00:18:42.520 { 00:18:42.520 "name": "BaseBdev4", 00:18:42.520 
"aliases": [ 00:18:42.520 "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c" 00:18:42.520 ], 00:18:42.520 "product_name": "Malloc disk", 00:18:42.520 "block_size": 512, 00:18:42.520 "num_blocks": 65536, 00:18:42.520 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:42.520 "assigned_rate_limits": { 00:18:42.520 "rw_ios_per_sec": 0, 00:18:42.520 "rw_mbytes_per_sec": 0, 00:18:42.520 "r_mbytes_per_sec": 0, 00:18:42.520 "w_mbytes_per_sec": 0 00:18:42.520 }, 00:18:42.520 "claimed": false, 00:18:42.520 "zoned": false, 00:18:42.520 "supported_io_types": { 00:18:42.520 "read": true, 00:18:42.520 "write": true, 00:18:42.520 "unmap": true, 00:18:42.520 "flush": true, 00:18:42.520 "reset": true, 00:18:42.520 "nvme_admin": false, 00:18:42.520 "nvme_io": false, 00:18:42.520 "nvme_io_md": false, 00:18:42.520 "write_zeroes": true, 00:18:42.520 "zcopy": true, 00:18:42.520 "get_zone_info": false, 00:18:42.520 "zone_management": false, 00:18:42.520 "zone_append": false, 00:18:42.520 "compare": false, 00:18:42.520 "compare_and_write": false, 00:18:42.520 "abort": true, 00:18:42.520 "seek_hole": false, 00:18:42.520 "seek_data": false, 00:18:42.520 "copy": true, 00:18:42.520 "nvme_iov_md": false 00:18:42.520 }, 00:18:42.520 "memory_domains": [ 00:18:42.520 { 00:18:42.520 "dma_device_id": "system", 00:18:42.520 "dma_device_type": 1 00:18:42.520 }, 00:18:42.520 { 00:18:42.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.520 "dma_device_type": 2 00:18:42.520 } 00:18:42.520 ], 00:18:42.520 "driver_specific": {} 00:18:42.520 } 00:18:42.520 ] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.520 
16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 [2024-10-08 16:26:35.764798] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.521 [2024-10-08 16:26:35.765092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.521 [2024-10-08 16:26:35.765289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.521 [2024-10-08 16:26:35.767776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.521 [2024-10-08 16:26:35.767984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.521 "name": "Existed_Raid", 00:18:42.521 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:42.521 "strip_size_kb": 64, 00:18:42.521 "state": "configuring", 00:18:42.521 "raid_level": "raid5f", 00:18:42.521 "superblock": true, 00:18:42.521 "num_base_bdevs": 4, 00:18:42.521 "num_base_bdevs_discovered": 3, 00:18:42.521 "num_base_bdevs_operational": 4, 00:18:42.521 "base_bdevs_list": [ 00:18:42.521 { 00:18:42.521 "name": "BaseBdev1", 00:18:42.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.521 "is_configured": false, 00:18:42.521 "data_offset": 0, 00:18:42.521 "data_size": 0 00:18:42.521 }, 00:18:42.521 { 00:18:42.521 "name": "BaseBdev2", 00:18:42.521 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:42.521 "is_configured": true, 00:18:42.521 "data_offset": 2048, 00:18:42.521 "data_size": 63488 00:18:42.521 }, 00:18:42.521 { 00:18:42.521 "name": "BaseBdev3", 
00:18:42.521 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:42.521 "is_configured": true, 00:18:42.521 "data_offset": 2048, 00:18:42.521 "data_size": 63488 00:18:42.521 }, 00:18:42.521 { 00:18:42.521 "name": "BaseBdev4", 00:18:42.521 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:42.521 "is_configured": true, 00:18:42.521 "data_offset": 2048, 00:18:42.521 "data_size": 63488 00:18:42.521 } 00:18:42.521 ] 00:18:42.521 }' 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.521 16:26:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.088 [2024-10-08 16:26:36.280959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.088 
16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.088 "name": "Existed_Raid", 00:18:43.088 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:43.088 "strip_size_kb": 64, 00:18:43.088 "state": "configuring", 00:18:43.088 "raid_level": "raid5f", 00:18:43.088 "superblock": true, 00:18:43.088 "num_base_bdevs": 4, 00:18:43.088 "num_base_bdevs_discovered": 2, 00:18:43.088 "num_base_bdevs_operational": 4, 00:18:43.088 "base_bdevs_list": [ 00:18:43.088 { 00:18:43.088 "name": "BaseBdev1", 00:18:43.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.088 "is_configured": false, 00:18:43.088 "data_offset": 0, 00:18:43.088 "data_size": 0 00:18:43.088 }, 00:18:43.088 { 00:18:43.088 "name": null, 00:18:43.088 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:43.088 "is_configured": false, 00:18:43.088 "data_offset": 0, 00:18:43.088 "data_size": 63488 00:18:43.088 }, 00:18:43.088 { 
00:18:43.088 "name": "BaseBdev3", 00:18:43.088 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:43.088 "is_configured": true, 00:18:43.088 "data_offset": 2048, 00:18:43.088 "data_size": 63488 00:18:43.088 }, 00:18:43.088 { 00:18:43.088 "name": "BaseBdev4", 00:18:43.088 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:43.088 "is_configured": true, 00:18:43.088 "data_offset": 2048, 00:18:43.088 "data_size": 63488 00:18:43.088 } 00:18:43.088 ] 00:18:43.088 }' 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.088 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.655 [2024-10-08 16:26:36.930817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.655 BaseBdev1 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.655 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.656 [ 00:18:43.656 { 00:18:43.656 "name": "BaseBdev1", 00:18:43.656 "aliases": [ 00:18:43.656 "028db70c-ed9a-4cf3-a81f-d3f0230f6236" 00:18:43.656 ], 00:18:43.656 "product_name": "Malloc disk", 00:18:43.656 "block_size": 512, 00:18:43.656 "num_blocks": 65536, 00:18:43.656 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:43.656 "assigned_rate_limits": { 00:18:43.656 "rw_ios_per_sec": 0, 00:18:43.656 "rw_mbytes_per_sec": 0, 00:18:43.656 
"r_mbytes_per_sec": 0, 00:18:43.656 "w_mbytes_per_sec": 0 00:18:43.656 }, 00:18:43.656 "claimed": true, 00:18:43.656 "claim_type": "exclusive_write", 00:18:43.656 "zoned": false, 00:18:43.656 "supported_io_types": { 00:18:43.656 "read": true, 00:18:43.656 "write": true, 00:18:43.656 "unmap": true, 00:18:43.656 "flush": true, 00:18:43.656 "reset": true, 00:18:43.656 "nvme_admin": false, 00:18:43.656 "nvme_io": false, 00:18:43.656 "nvme_io_md": false, 00:18:43.656 "write_zeroes": true, 00:18:43.656 "zcopy": true, 00:18:43.656 "get_zone_info": false, 00:18:43.656 "zone_management": false, 00:18:43.656 "zone_append": false, 00:18:43.656 "compare": false, 00:18:43.656 "compare_and_write": false, 00:18:43.656 "abort": true, 00:18:43.656 "seek_hole": false, 00:18:43.656 "seek_data": false, 00:18:43.656 "copy": true, 00:18:43.656 "nvme_iov_md": false 00:18:43.656 }, 00:18:43.656 "memory_domains": [ 00:18:43.656 { 00:18:43.656 "dma_device_id": "system", 00:18:43.656 "dma_device_type": 1 00:18:43.656 }, 00:18:43.656 { 00:18:43.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.656 "dma_device_type": 2 00:18:43.656 } 00:18:43.656 ], 00:18:43.656 "driver_specific": {} 00:18:43.656 } 00:18:43.656 ] 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.656 16:26:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.656 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.915 16:26:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.915 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.915 "name": "Existed_Raid", 00:18:43.915 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:43.915 "strip_size_kb": 64, 00:18:43.915 "state": "configuring", 00:18:43.915 "raid_level": "raid5f", 00:18:43.915 "superblock": true, 00:18:43.915 "num_base_bdevs": 4, 00:18:43.915 "num_base_bdevs_discovered": 3, 00:18:43.915 "num_base_bdevs_operational": 4, 00:18:43.915 "base_bdevs_list": [ 00:18:43.915 { 00:18:43.915 "name": "BaseBdev1", 00:18:43.915 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:43.915 "is_configured": true, 00:18:43.915 "data_offset": 2048, 00:18:43.915 "data_size": 63488 00:18:43.915 
}, 00:18:43.915 { 00:18:43.915 "name": null, 00:18:43.915 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:43.915 "is_configured": false, 00:18:43.915 "data_offset": 0, 00:18:43.915 "data_size": 63488 00:18:43.915 }, 00:18:43.915 { 00:18:43.915 "name": "BaseBdev3", 00:18:43.915 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:43.915 "is_configured": true, 00:18:43.915 "data_offset": 2048, 00:18:43.915 "data_size": 63488 00:18:43.915 }, 00:18:43.915 { 00:18:43.915 "name": "BaseBdev4", 00:18:43.915 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:43.915 "is_configured": true, 00:18:43.915 "data_offset": 2048, 00:18:43.915 "data_size": 63488 00:18:43.915 } 00:18:43.915 ] 00:18:43.915 }' 00:18:43.915 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.915 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.173 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.173 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:44.173 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.173 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.432 
[2024-10-08 16:26:37.543059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.432 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.433 "name": "Existed_Raid", 00:18:44.433 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:44.433 "strip_size_kb": 64, 00:18:44.433 "state": "configuring", 00:18:44.433 "raid_level": "raid5f", 00:18:44.433 "superblock": true, 00:18:44.433 "num_base_bdevs": 4, 00:18:44.433 "num_base_bdevs_discovered": 2, 00:18:44.433 "num_base_bdevs_operational": 4, 00:18:44.433 "base_bdevs_list": [ 00:18:44.433 { 00:18:44.433 "name": "BaseBdev1", 00:18:44.433 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:44.433 "is_configured": true, 00:18:44.433 "data_offset": 2048, 00:18:44.433 "data_size": 63488 00:18:44.433 }, 00:18:44.433 { 00:18:44.433 "name": null, 00:18:44.433 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:44.433 "is_configured": false, 00:18:44.433 "data_offset": 0, 00:18:44.433 "data_size": 63488 00:18:44.433 }, 00:18:44.433 { 00:18:44.433 "name": null, 00:18:44.433 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:44.433 "is_configured": false, 00:18:44.433 "data_offset": 0, 00:18:44.433 "data_size": 63488 00:18:44.433 }, 00:18:44.433 { 00:18:44.433 "name": "BaseBdev4", 00:18:44.433 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:44.433 "is_configured": true, 00:18:44.433 "data_offset": 2048, 00:18:44.433 "data_size": 63488 00:18:44.433 } 00:18:44.433 ] 00:18:44.433 }' 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.433 16:26:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.001 [2024-10-08 16:26:38.135241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.001 16:26:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.001 "name": "Existed_Raid", 00:18:45.001 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:45.001 "strip_size_kb": 64, 00:18:45.001 "state": "configuring", 00:18:45.001 "raid_level": "raid5f", 00:18:45.001 "superblock": true, 00:18:45.001 "num_base_bdevs": 4, 00:18:45.001 "num_base_bdevs_discovered": 3, 00:18:45.001 "num_base_bdevs_operational": 4, 00:18:45.001 "base_bdevs_list": [ 00:18:45.001 { 00:18:45.001 "name": "BaseBdev1", 00:18:45.001 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:45.001 "is_configured": true, 00:18:45.001 "data_offset": 2048, 00:18:45.001 "data_size": 63488 00:18:45.001 }, 00:18:45.001 { 00:18:45.001 "name": null, 00:18:45.001 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:45.001 "is_configured": false, 00:18:45.001 "data_offset": 0, 00:18:45.001 "data_size": 63488 00:18:45.001 }, 00:18:45.001 { 00:18:45.001 "name": "BaseBdev3", 00:18:45.001 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:45.001 "is_configured": true, 00:18:45.001 "data_offset": 2048, 00:18:45.001 "data_size": 63488 00:18:45.001 }, 00:18:45.001 { 
00:18:45.001 "name": "BaseBdev4", 00:18:45.001 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:45.001 "is_configured": true, 00:18:45.001 "data_offset": 2048, 00:18:45.001 "data_size": 63488 00:18:45.001 } 00:18:45.001 ] 00:18:45.001 }' 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.001 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 [2024-10-08 16:26:38.699439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.583 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.583 "name": "Existed_Raid", 00:18:45.583 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:45.583 "strip_size_kb": 64, 00:18:45.583 "state": "configuring", 00:18:45.583 "raid_level": "raid5f", 00:18:45.583 "superblock": true, 00:18:45.583 "num_base_bdevs": 4, 00:18:45.583 "num_base_bdevs_discovered": 2, 00:18:45.583 
"num_base_bdevs_operational": 4, 00:18:45.583 "base_bdevs_list": [ 00:18:45.583 { 00:18:45.583 "name": null, 00:18:45.583 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:45.583 "is_configured": false, 00:18:45.583 "data_offset": 0, 00:18:45.583 "data_size": 63488 00:18:45.583 }, 00:18:45.583 { 00:18:45.583 "name": null, 00:18:45.584 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:45.584 "is_configured": false, 00:18:45.584 "data_offset": 0, 00:18:45.584 "data_size": 63488 00:18:45.584 }, 00:18:45.584 { 00:18:45.584 "name": "BaseBdev3", 00:18:45.584 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:45.584 "is_configured": true, 00:18:45.584 "data_offset": 2048, 00:18:45.584 "data_size": 63488 00:18:45.584 }, 00:18:45.584 { 00:18:45.584 "name": "BaseBdev4", 00:18:45.584 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:45.584 "is_configured": true, 00:18:45.584 "data_offset": 2048, 00:18:45.584 "data_size": 63488 00:18:45.584 } 00:18:45.584 ] 00:18:45.584 }' 00:18:45.584 16:26:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.584 16:26:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.195 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.196 [2024-10-08 16:26:39.391838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.196 "name": "Existed_Raid", 00:18:46.196 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:46.196 "strip_size_kb": 64, 00:18:46.196 "state": "configuring", 00:18:46.196 "raid_level": "raid5f", 00:18:46.196 "superblock": true, 00:18:46.196 "num_base_bdevs": 4, 00:18:46.196 "num_base_bdevs_discovered": 3, 00:18:46.196 "num_base_bdevs_operational": 4, 00:18:46.196 "base_bdevs_list": [ 00:18:46.196 { 00:18:46.196 "name": null, 00:18:46.196 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:46.196 "is_configured": false, 00:18:46.196 "data_offset": 0, 00:18:46.196 "data_size": 63488 00:18:46.196 }, 00:18:46.196 { 00:18:46.196 "name": "BaseBdev2", 00:18:46.196 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:46.196 "is_configured": true, 00:18:46.196 "data_offset": 2048, 00:18:46.196 "data_size": 63488 00:18:46.196 }, 00:18:46.196 { 00:18:46.196 "name": "BaseBdev3", 00:18:46.196 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:46.196 "is_configured": true, 00:18:46.196 "data_offset": 2048, 00:18:46.196 "data_size": 63488 00:18:46.196 }, 00:18:46.196 { 00:18:46.196 "name": "BaseBdev4", 00:18:46.196 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:46.196 "is_configured": true, 00:18:46.196 "data_offset": 2048, 00:18:46.196 "data_size": 63488 00:18:46.196 } 00:18:46.196 ] 00:18:46.196 }' 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.196 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.763 16:26:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 028db70c-ed9a-4cf3-a81f-d3f0230f6236 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.763 [2024-10-08 16:26:40.043637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:46.763 [2024-10-08 16:26:40.044277] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.763 [2024-10-08 
16:26:40.044304] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:46.763 NewBaseBdev 00:18:46.763 [2024-10-08 16:26:40.044665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.763 [2024-10-08 16:26:40.051185] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.763 [2024-10-08 16:26:40.051379] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:46.763 [2024-10-08 16:26:40.051751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.763 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.763 [ 00:18:46.763 { 00:18:46.763 "name": "NewBaseBdev", 00:18:46.763 "aliases": [ 00:18:46.763 "028db70c-ed9a-4cf3-a81f-d3f0230f6236" 00:18:46.763 ], 00:18:46.763 "product_name": "Malloc disk", 00:18:46.763 "block_size": 512, 00:18:46.763 "num_blocks": 65536, 00:18:46.763 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:46.763 "assigned_rate_limits": { 00:18:46.763 "rw_ios_per_sec": 0, 00:18:46.763 "rw_mbytes_per_sec": 0, 00:18:46.763 "r_mbytes_per_sec": 0, 00:18:46.763 "w_mbytes_per_sec": 0 00:18:46.763 }, 00:18:46.763 "claimed": true, 00:18:46.763 "claim_type": "exclusive_write", 00:18:46.763 "zoned": false, 00:18:46.763 "supported_io_types": { 00:18:46.763 "read": true, 00:18:46.763 "write": true, 00:18:46.763 "unmap": true, 00:18:46.763 "flush": true, 00:18:46.763 "reset": true, 00:18:46.763 "nvme_admin": false, 00:18:46.763 "nvme_io": false, 00:18:46.763 "nvme_io_md": false, 00:18:46.763 "write_zeroes": true, 00:18:46.763 "zcopy": true, 00:18:46.763 "get_zone_info": false, 00:18:46.763 "zone_management": false, 00:18:46.763 "zone_append": false, 00:18:46.763 "compare": false, 00:18:46.763 "compare_and_write": false, 00:18:46.763 "abort": true, 00:18:46.763 "seek_hole": false, 00:18:46.763 "seek_data": false, 00:18:46.763 "copy": true, 00:18:46.763 "nvme_iov_md": false 00:18:46.763 }, 00:18:46.763 "memory_domains": [ 00:18:46.763 { 00:18:46.763 "dma_device_id": "system", 00:18:46.763 "dma_device_type": 1 00:18:46.763 }, 00:18:47.022 { 00:18:47.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.022 "dma_device_type": 2 00:18:47.022 } 00:18:47.022 ], 00:18:47.022 "driver_specific": {} 00:18:47.022 } 00:18:47.022 ] 00:18:47.022 16:26:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.022 "name": "Existed_Raid", 00:18:47.022 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:47.022 "strip_size_kb": 64, 00:18:47.022 "state": "online", 00:18:47.022 "raid_level": "raid5f", 00:18:47.022 "superblock": true, 00:18:47.022 "num_base_bdevs": 4, 00:18:47.022 "num_base_bdevs_discovered": 4, 00:18:47.022 "num_base_bdevs_operational": 4, 00:18:47.022 "base_bdevs_list": [ 00:18:47.022 { 00:18:47.022 "name": "NewBaseBdev", 00:18:47.022 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:47.022 "is_configured": true, 00:18:47.022 "data_offset": 2048, 00:18:47.022 "data_size": 63488 00:18:47.022 }, 00:18:47.022 { 00:18:47.022 "name": "BaseBdev2", 00:18:47.022 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:47.022 "is_configured": true, 00:18:47.022 "data_offset": 2048, 00:18:47.022 "data_size": 63488 00:18:47.022 }, 00:18:47.022 { 00:18:47.022 "name": "BaseBdev3", 00:18:47.022 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:47.022 "is_configured": true, 00:18:47.022 "data_offset": 2048, 00:18:47.022 "data_size": 63488 00:18:47.022 }, 00:18:47.022 { 00:18:47.022 "name": "BaseBdev4", 00:18:47.022 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:47.022 "is_configured": true, 00:18:47.022 "data_offset": 2048, 00:18:47.022 "data_size": 63488 00:18:47.022 } 00:18:47.022 ] 00:18:47.022 }' 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.022 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.589 [2024-10-08 16:26:40.623809] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.589 "name": "Existed_Raid", 00:18:47.589 "aliases": [ 00:18:47.589 "2c4065fe-8696-457f-863d-c6cffa141949" 00:18:47.589 ], 00:18:47.589 "product_name": "Raid Volume", 00:18:47.589 "block_size": 512, 00:18:47.589 "num_blocks": 190464, 00:18:47.589 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:47.589 "assigned_rate_limits": { 00:18:47.589 "rw_ios_per_sec": 0, 00:18:47.589 "rw_mbytes_per_sec": 0, 00:18:47.589 "r_mbytes_per_sec": 0, 00:18:47.589 "w_mbytes_per_sec": 0 00:18:47.589 }, 00:18:47.589 "claimed": false, 00:18:47.589 "zoned": false, 00:18:47.589 "supported_io_types": { 00:18:47.589 "read": true, 00:18:47.589 "write": true, 00:18:47.589 "unmap": false, 00:18:47.589 "flush": false, 00:18:47.589 "reset": true, 00:18:47.589 "nvme_admin": false, 00:18:47.589 "nvme_io": false, 
00:18:47.589 "nvme_io_md": false, 00:18:47.589 "write_zeroes": true, 00:18:47.589 "zcopy": false, 00:18:47.589 "get_zone_info": false, 00:18:47.589 "zone_management": false, 00:18:47.589 "zone_append": false, 00:18:47.589 "compare": false, 00:18:47.589 "compare_and_write": false, 00:18:47.589 "abort": false, 00:18:47.589 "seek_hole": false, 00:18:47.589 "seek_data": false, 00:18:47.589 "copy": false, 00:18:47.589 "nvme_iov_md": false 00:18:47.589 }, 00:18:47.589 "driver_specific": { 00:18:47.589 "raid": { 00:18:47.589 "uuid": "2c4065fe-8696-457f-863d-c6cffa141949", 00:18:47.589 "strip_size_kb": 64, 00:18:47.589 "state": "online", 00:18:47.589 "raid_level": "raid5f", 00:18:47.589 "superblock": true, 00:18:47.589 "num_base_bdevs": 4, 00:18:47.589 "num_base_bdevs_discovered": 4, 00:18:47.589 "num_base_bdevs_operational": 4, 00:18:47.589 "base_bdevs_list": [ 00:18:47.589 { 00:18:47.589 "name": "NewBaseBdev", 00:18:47.589 "uuid": "028db70c-ed9a-4cf3-a81f-d3f0230f6236", 00:18:47.589 "is_configured": true, 00:18:47.589 "data_offset": 2048, 00:18:47.589 "data_size": 63488 00:18:47.589 }, 00:18:47.589 { 00:18:47.589 "name": "BaseBdev2", 00:18:47.589 "uuid": "a13d5dbb-916a-4bc4-9994-3a0bac0423d1", 00:18:47.589 "is_configured": true, 00:18:47.589 "data_offset": 2048, 00:18:47.589 "data_size": 63488 00:18:47.589 }, 00:18:47.589 { 00:18:47.589 "name": "BaseBdev3", 00:18:47.589 "uuid": "3d4c11e9-5fea-40f7-8d15-a87e8bc17fa5", 00:18:47.589 "is_configured": true, 00:18:47.589 "data_offset": 2048, 00:18:47.589 "data_size": 63488 00:18:47.589 }, 00:18:47.589 { 00:18:47.589 "name": "BaseBdev4", 00:18:47.589 "uuid": "3fa3d2f3-161c-4b4d-b1e0-1e9db3943f1c", 00:18:47.589 "is_configured": true, 00:18:47.589 "data_offset": 2048, 00:18:47.589 "data_size": 63488 00:18:47.589 } 00:18:47.589 ] 00:18:47.589 } 00:18:47.589 } 00:18:47.589 }' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:47.589 BaseBdev2 00:18:47.589 BaseBdev3 00:18:47.589 BaseBdev4' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.589 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.848 16:26:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.848 [2024-10-08 16:26:41.019714] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.848 [2024-10-08 16:26:41.019762] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.848 [2024-10-08 16:26:41.019869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.848 [2024-10-08 16:26:41.020246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.848 [2024-10-08 16:26:41.020265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84217 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84217 ']' 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84217 00:18:47.848 16:26:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84217 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.848 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.848 killing process with pid 84217 00:18:47.849 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84217' 00:18:47.849 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84217 00:18:47.849 [2024-10-08 16:26:41.063943] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.849 16:26:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84217 00:18:48.107 [2024-10-08 16:26:41.399520] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.485 16:26:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:49.485 ************************************ 00:18:49.485 END TEST raid5f_state_function_test_sb 00:18:49.485 ************************************ 00:18:49.485 00:18:49.485 real 0m13.205s 00:18:49.485 user 0m21.741s 00:18:49.485 sys 0m1.951s 00:18:49.485 16:26:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.485 16:26:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.485 16:26:42 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:49.485 16:26:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:49.485 
16:26:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.485 16:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.485 ************************************ 00:18:49.485 START TEST raid5f_superblock_test 00:18:49.485 ************************************ 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84900 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84900 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84900 ']' 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.485 16:26:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.485 [2024-10-08 16:26:42.739786] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:18:49.485 [2024-10-08 16:26:42.740197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84900 ] 00:18:49.743 [2024-10-08 16:26:42.902153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.002 [2024-10-08 16:26:43.120668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.260 [2024-10-08 16:26:43.327913] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.260 [2024-10-08 16:26:43.327990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.519 malloc1 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.519 [2024-10-08 16:26:43.702446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:50.519 [2024-10-08 16:26:43.702737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.519 [2024-10-08 16:26:43.702824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:50.519 [2024-10-08 16:26:43.702963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.519 [2024-10-08 16:26:43.705891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.519 [2024-10-08 16:26:43.706065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:50.519 pt1 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.519 malloc2 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.519 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.520 [2024-10-08 16:26:43.782317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:50.520 [2024-10-08 16:26:43.782590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.520 [2024-10-08 16:26:43.782789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.520 [2024-10-08 16:26:43.782821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.520 [2024-10-08 16:26:43.786221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.520 [2024-10-08 16:26:43.786276] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:50.520 pt2 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.520 malloc3 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.520 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.794 [2024-10-08 16:26:43.846859] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.794 [2024-10-08 16:26:43.847084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.794 [2024-10-08 16:26:43.847199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:50.794 [2024-10-08 16:26:43.847337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.794 [2024-10-08 16:26:43.850744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.794 [2024-10-08 16:26:43.850816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.794 pt3 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.794 16:26:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.794 malloc4 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.794 [2024-10-08 16:26:43.908251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:50.794 [2024-10-08 16:26:43.908330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.794 [2024-10-08 16:26:43.908366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:50.794 [2024-10-08 16:26:43.908403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.794 pt4 00:18:50.794 [2024-10-08 16:26:43.911887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.794 [2024-10-08 16:26:43.911942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.794 16:26:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.794 [2024-10-08 16:26:43.916408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.794 [2024-10-08 16:26:43.919498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.794 [2024-10-08 16:26:43.919770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.794 [2024-10-08 16:26:43.919901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:50.795 [2024-10-08 16:26:43.920243] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:50.795 [2024-10-08 16:26:43.920276] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:50.795 [2024-10-08 16:26:43.920729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:50.795 [2024-10-08 16:26:43.929453] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:50.795 [2024-10-08 16:26:43.929505] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:50.795 [2024-10-08 16:26:43.929852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.795 
16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.795 "name": "raid_bdev1", 00:18:50.795 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:50.795 "strip_size_kb": 64, 00:18:50.795 "state": "online", 00:18:50.795 "raid_level": "raid5f", 00:18:50.795 "superblock": true, 00:18:50.795 "num_base_bdevs": 4, 00:18:50.795 "num_base_bdevs_discovered": 4, 00:18:50.795 "num_base_bdevs_operational": 4, 00:18:50.795 "base_bdevs_list": [ 00:18:50.795 { 00:18:50.795 "name": "pt1", 00:18:50.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.795 "is_configured": true, 00:18:50.795 "data_offset": 2048, 00:18:50.795 "data_size": 63488 00:18:50.795 }, 00:18:50.795 { 00:18:50.795 "name": "pt2", 00:18:50.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.795 "is_configured": true, 00:18:50.795 "data_offset": 2048, 00:18:50.795 
"data_size": 63488 00:18:50.795 }, 00:18:50.795 { 00:18:50.795 "name": "pt3", 00:18:50.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:50.795 "is_configured": true, 00:18:50.795 "data_offset": 2048, 00:18:50.795 "data_size": 63488 00:18:50.795 }, 00:18:50.795 { 00:18:50.795 "name": "pt4", 00:18:50.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:50.795 "is_configured": true, 00:18:50.795 "data_offset": 2048, 00:18:50.795 "data_size": 63488 00:18:50.795 } 00:18:50.795 ] 00:18:50.795 }' 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.795 16:26:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 [2024-10-08 16:26:44.459777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.372 "name": "raid_bdev1", 00:18:51.372 "aliases": [ 00:18:51.372 "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3" 00:18:51.372 ], 00:18:51.372 "product_name": "Raid Volume", 00:18:51.372 "block_size": 512, 00:18:51.372 "num_blocks": 190464, 00:18:51.372 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:51.372 "assigned_rate_limits": { 00:18:51.372 "rw_ios_per_sec": 0, 00:18:51.372 "rw_mbytes_per_sec": 0, 00:18:51.372 "r_mbytes_per_sec": 0, 00:18:51.372 "w_mbytes_per_sec": 0 00:18:51.372 }, 00:18:51.372 "claimed": false, 00:18:51.372 "zoned": false, 00:18:51.372 "supported_io_types": { 00:18:51.372 "read": true, 00:18:51.372 "write": true, 00:18:51.372 "unmap": false, 00:18:51.372 "flush": false, 00:18:51.372 "reset": true, 00:18:51.372 "nvme_admin": false, 00:18:51.372 "nvme_io": false, 00:18:51.372 "nvme_io_md": false, 00:18:51.372 "write_zeroes": true, 00:18:51.372 "zcopy": false, 00:18:51.372 "get_zone_info": false, 00:18:51.372 "zone_management": false, 00:18:51.372 "zone_append": false, 00:18:51.372 "compare": false, 00:18:51.372 "compare_and_write": false, 00:18:51.372 "abort": false, 00:18:51.372 "seek_hole": false, 00:18:51.372 "seek_data": false, 00:18:51.372 "copy": false, 00:18:51.372 "nvme_iov_md": false 00:18:51.372 }, 00:18:51.372 "driver_specific": { 00:18:51.372 "raid": { 00:18:51.372 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:51.372 "strip_size_kb": 64, 00:18:51.372 "state": "online", 00:18:51.372 "raid_level": "raid5f", 00:18:51.372 "superblock": true, 00:18:51.372 "num_base_bdevs": 4, 00:18:51.372 "num_base_bdevs_discovered": 4, 00:18:51.372 "num_base_bdevs_operational": 4, 00:18:51.372 "base_bdevs_list": [ 00:18:51.372 { 00:18:51.372 "name": "pt1", 00:18:51.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:51.372 "is_configured": true, 00:18:51.372 "data_offset": 2048, 
00:18:51.372 "data_size": 63488 00:18:51.372 }, 00:18:51.372 { 00:18:51.372 "name": "pt2", 00:18:51.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.372 "is_configured": true, 00:18:51.372 "data_offset": 2048, 00:18:51.372 "data_size": 63488 00:18:51.372 }, 00:18:51.372 { 00:18:51.372 "name": "pt3", 00:18:51.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:51.372 "is_configured": true, 00:18:51.372 "data_offset": 2048, 00:18:51.372 "data_size": 63488 00:18:51.372 }, 00:18:51.372 { 00:18:51.372 "name": "pt4", 00:18:51.372 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:51.372 "is_configured": true, 00:18:51.372 "data_offset": 2048, 00:18:51.372 "data_size": 63488 00:18:51.372 } 00:18:51.372 ] 00:18:51.372 } 00:18:51.372 } 00:18:51.372 }' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:51.372 pt2 00:18:51.372 pt3 00:18:51.372 pt4' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 16:26:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.372 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:51.631 [2024-10-08 16:26:44.835646] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 ']' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 [2024-10-08 16:26:44.879397] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.631 [2024-10-08 16:26:44.879593] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.631 [2024-10-08 16:26:44.879879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.631 [2024-10-08 16:26:44.880107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.631 [2024-10-08 16:26:44.880145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:51.631 
16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.631 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 16:26:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 16:26:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:51.890 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 [2024-10-08 16:26:45.039462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:51.891 [2024-10-08 16:26:45.042234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:51.891 [2024-10-08 16:26:45.042451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:51.891 [2024-10-08 16:26:45.042538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:51.891 [2024-10-08 16:26:45.042614] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:51.891 [2024-10-08 16:26:45.042692] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:51.891 [2024-10-08 16:26:45.042734] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:51.891 [2024-10-08 16:26:45.042767] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:51.891 [2024-10-08 16:26:45.042790] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.891 [2024-10-08 16:26:45.042810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:51.891 request: 00:18:51.891 { 00:18:51.891 "name": "raid_bdev1", 00:18:51.891 "raid_level": "raid5f", 00:18:51.891 "base_bdevs": [ 00:18:51.891 "malloc1", 00:18:51.891 "malloc2", 00:18:51.891 "malloc3", 00:18:51.891 "malloc4" 00:18:51.891 ], 00:18:51.891 "strip_size_kb": 64, 00:18:51.891 "superblock": false, 00:18:51.891 "method": "bdev_raid_create", 00:18:51.891 "req_id": 1 00:18:51.891 } 00:18:51.891 Got JSON-RPC error response 
00:18:51.891 response: 00:18:51.891 { 00:18:51.891 "code": -17, 00:18:51.891 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:51.891 } 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 [2024-10-08 16:26:45.111625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:51.891 [2024-10-08 16:26:45.111829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:51.891 [2024-10-08 16:26:45.111952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:51.891 [2024-10-08 16:26:45.112066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.891 [2024-10-08 16:26:45.115187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.891 [2024-10-08 16:26:45.115360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:51.891 pt1 00:18:51.891 [2024-10-08 16:26:45.115600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:51.891 [2024-10-08 16:26:45.115695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.891 "name": "raid_bdev1", 00:18:51.891 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:51.891 "strip_size_kb": 64, 00:18:51.891 "state": "configuring", 00:18:51.891 "raid_level": "raid5f", 00:18:51.891 "superblock": true, 00:18:51.891 "num_base_bdevs": 4, 00:18:51.891 "num_base_bdevs_discovered": 1, 00:18:51.891 "num_base_bdevs_operational": 4, 00:18:51.891 "base_bdevs_list": [ 00:18:51.891 { 00:18:51.891 "name": "pt1", 00:18:51.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:51.891 "is_configured": true, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": null, 00:18:51.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.891 "is_configured": false, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": null, 00:18:51.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:51.891 "is_configured": false, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": null, 00:18:51.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:51.891 "is_configured": false, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 } 00:18:51.891 ] 00:18:51.891 }' 
00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.891 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.458 [2024-10-08 16:26:45.631866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:52.458 [2024-10-08 16:26:45.632226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.458 [2024-10-08 16:26:45.632266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:52.458 [2024-10-08 16:26:45.632285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.458 [2024-10-08 16:26:45.632909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.458 [2024-10-08 16:26:45.632941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:52.458 [2024-10-08 16:26:45.633048] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:52.458 [2024-10-08 16:26:45.633085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.458 pt2 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.458 [2024-10-08 16:26:45.639852] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.458 "name": "raid_bdev1", 00:18:52.458 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:52.458 "strip_size_kb": 64, 00:18:52.458 "state": "configuring", 00:18:52.458 "raid_level": "raid5f", 00:18:52.458 "superblock": true, 00:18:52.458 "num_base_bdevs": 4, 00:18:52.458 "num_base_bdevs_discovered": 1, 00:18:52.458 "num_base_bdevs_operational": 4, 00:18:52.458 "base_bdevs_list": [ 00:18:52.458 { 00:18:52.458 "name": "pt1", 00:18:52.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:52.458 "is_configured": true, 00:18:52.458 "data_offset": 2048, 00:18:52.458 "data_size": 63488 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "name": null, 00:18:52.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.458 "is_configured": false, 00:18:52.458 "data_offset": 0, 00:18:52.458 "data_size": 63488 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "name": null, 00:18:52.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:52.458 "is_configured": false, 00:18:52.458 "data_offset": 2048, 00:18:52.458 "data_size": 63488 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "name": null, 00:18:52.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:52.458 "is_configured": false, 00:18:52.458 "data_offset": 2048, 00:18:52.458 "data_size": 63488 00:18:52.458 } 00:18:52.458 ] 00:18:52.458 }' 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.458 16:26:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.026 [2024-10-08 16:26:46.184421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.026 [2024-10-08 16:26:46.184736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.026 [2024-10-08 16:26:46.184814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:53.026 [2024-10-08 16:26:46.184938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.026 [2024-10-08 16:26:46.185547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.026 [2024-10-08 16:26:46.185573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.026 [2024-10-08 16:26:46.185687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:53.026 [2024-10-08 16:26:46.185719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.026 pt2 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.026 [2024-10-08 16:26:46.192362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:53.026 [2024-10-08 16:26:46.192589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.026 [2024-10-08 16:26:46.192673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:53.026 [2024-10-08 16:26:46.192794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.026 [2024-10-08 16:26:46.193278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.026 [2024-10-08 16:26:46.193437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:53.026 [2024-10-08 16:26:46.193646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:53.026 [2024-10-08 16:26:46.193783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:53.026 pt3 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.026 [2024-10-08 16:26:46.200342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:53.026 [2024-10-08 16:26:46.200552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.026 [2024-10-08 16:26:46.200720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:53.026 [2024-10-08 16:26:46.200745] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.026 [2024-10-08 16:26:46.201221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.026 [2024-10-08 16:26:46.201256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:53.026 [2024-10-08 16:26:46.201335] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:53.026 [2024-10-08 16:26:46.201363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:53.026 [2024-10-08 16:26:46.201581] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:53.026 [2024-10-08 16:26:46.201598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:53.026 [2024-10-08 16:26:46.201918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:53.026 [2024-10-08 16:26:46.208430] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:53.026 pt4 00:18:53.026 [2024-10-08 16:26:46.208652] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:53.026 [2024-10-08 16:26:46.208896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.026 "name": "raid_bdev1", 00:18:53.026 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:53.026 "strip_size_kb": 64, 00:18:53.026 "state": "online", 00:18:53.026 "raid_level": "raid5f", 00:18:53.026 "superblock": true, 00:18:53.026 "num_base_bdevs": 4, 00:18:53.026 "num_base_bdevs_discovered": 4, 00:18:53.026 "num_base_bdevs_operational": 4, 00:18:53.026 "base_bdevs_list": [ 00:18:53.026 { 00:18:53.026 "name": "pt1", 00:18:53.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.026 "is_configured": true, 00:18:53.026 
"data_offset": 2048, 00:18:53.026 "data_size": 63488 00:18:53.026 }, 00:18:53.026 { 00:18:53.026 "name": "pt2", 00:18:53.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.026 "is_configured": true, 00:18:53.026 "data_offset": 2048, 00:18:53.026 "data_size": 63488 00:18:53.026 }, 00:18:53.026 { 00:18:53.026 "name": "pt3", 00:18:53.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:53.026 "is_configured": true, 00:18:53.026 "data_offset": 2048, 00:18:53.026 "data_size": 63488 00:18:53.026 }, 00:18:53.026 { 00:18:53.026 "name": "pt4", 00:18:53.026 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:53.026 "is_configured": true, 00:18:53.026 "data_offset": 2048, 00:18:53.026 "data_size": 63488 00:18:53.026 } 00:18:53.026 ] 00:18:53.026 }' 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.026 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.594 16:26:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:53.594 [2024-10-08 16:26:46.732773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.594 "name": "raid_bdev1", 00:18:53.594 "aliases": [ 00:18:53.594 "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3" 00:18:53.594 ], 00:18:53.594 "product_name": "Raid Volume", 00:18:53.594 "block_size": 512, 00:18:53.594 "num_blocks": 190464, 00:18:53.594 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:53.594 "assigned_rate_limits": { 00:18:53.594 "rw_ios_per_sec": 0, 00:18:53.594 "rw_mbytes_per_sec": 0, 00:18:53.594 "r_mbytes_per_sec": 0, 00:18:53.594 "w_mbytes_per_sec": 0 00:18:53.594 }, 00:18:53.594 "claimed": false, 00:18:53.594 "zoned": false, 00:18:53.594 "supported_io_types": { 00:18:53.594 "read": true, 00:18:53.594 "write": true, 00:18:53.594 "unmap": false, 00:18:53.594 "flush": false, 00:18:53.594 "reset": true, 00:18:53.594 "nvme_admin": false, 00:18:53.594 "nvme_io": false, 00:18:53.594 "nvme_io_md": false, 00:18:53.594 "write_zeroes": true, 00:18:53.594 "zcopy": false, 00:18:53.594 "get_zone_info": false, 00:18:53.594 "zone_management": false, 00:18:53.594 "zone_append": false, 00:18:53.594 "compare": false, 00:18:53.594 "compare_and_write": false, 00:18:53.594 "abort": false, 00:18:53.594 "seek_hole": false, 00:18:53.594 "seek_data": false, 00:18:53.594 "copy": false, 00:18:53.594 "nvme_iov_md": false 00:18:53.594 }, 00:18:53.594 "driver_specific": { 00:18:53.594 "raid": { 00:18:53.594 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:53.594 "strip_size_kb": 64, 00:18:53.594 "state": "online", 00:18:53.594 "raid_level": "raid5f", 00:18:53.594 "superblock": true, 00:18:53.594 "num_base_bdevs": 4, 00:18:53.594 "num_base_bdevs_discovered": 4, 
00:18:53.594 "num_base_bdevs_operational": 4, 00:18:53.594 "base_bdevs_list": [ 00:18:53.594 { 00:18:53.594 "name": "pt1", 00:18:53.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.594 "is_configured": true, 00:18:53.594 "data_offset": 2048, 00:18:53.594 "data_size": 63488 00:18:53.594 }, 00:18:53.594 { 00:18:53.594 "name": "pt2", 00:18:53.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.594 "is_configured": true, 00:18:53.594 "data_offset": 2048, 00:18:53.594 "data_size": 63488 00:18:53.594 }, 00:18:53.594 { 00:18:53.594 "name": "pt3", 00:18:53.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:53.594 "is_configured": true, 00:18:53.594 "data_offset": 2048, 00:18:53.594 "data_size": 63488 00:18:53.594 }, 00:18:53.594 { 00:18:53.594 "name": "pt4", 00:18:53.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:53.594 "is_configured": true, 00:18:53.594 "data_offset": 2048, 00:18:53.594 "data_size": 63488 00:18:53.594 } 00:18:53.594 ] 00:18:53.594 } 00:18:53.594 } 00:18:53.594 }' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:53.594 pt2 00:18:53.594 pt3 00:18:53.594 pt4' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.594 16:26:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.594 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:53.853 16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.853 
16:26:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.853 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.854 [2024-10-08 16:26:47.112813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 '!=' 8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 ']' 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.854 [2024-10-08 16:26:47.164677] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.854 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.112 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.112 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.112 "name": "raid_bdev1", 00:18:54.112 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:54.112 "strip_size_kb": 64, 00:18:54.112 "state": "online", 00:18:54.112 "raid_level": "raid5f", 00:18:54.112 "superblock": true, 00:18:54.112 "num_base_bdevs": 4, 00:18:54.112 "num_base_bdevs_discovered": 3, 00:18:54.112 "num_base_bdevs_operational": 3, 00:18:54.112 "base_bdevs_list": [ 00:18:54.112 { 00:18:54.112 "name": null, 00:18:54.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.112 "is_configured": false, 00:18:54.112 "data_offset": 0, 00:18:54.112 "data_size": 63488 00:18:54.112 }, 00:18:54.112 { 00:18:54.112 "name": "pt2", 00:18:54.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.112 "is_configured": true, 00:18:54.112 "data_offset": 2048, 00:18:54.112 "data_size": 63488 00:18:54.112 }, 00:18:54.112 { 00:18:54.112 "name": "pt3", 00:18:54.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.112 "is_configured": true, 00:18:54.112 "data_offset": 2048, 00:18:54.112 "data_size": 63488 00:18:54.112 }, 00:18:54.112 { 00:18:54.112 "name": "pt4", 00:18:54.112 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.112 "is_configured": true, 00:18:54.112 
"data_offset": 2048, 00:18:54.112 "data_size": 63488 00:18:54.113 } 00:18:54.113 ] 00:18:54.113 }' 00:18:54.113 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.113 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.371 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.371 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.371 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 [2024-10-08 16:26:47.692791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.630 [2024-10-08 16:26:47.692867] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.630 [2024-10-08 16:26:47.692972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.630 [2024-10-08 16:26:47.693082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.630 [2024-10-08 16:26:47.693100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 [2024-10-08 16:26:47.784767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.630 [2024-10-08 16:26:47.785004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.630 [2024-10-08 16:26:47.785047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:54.630 [2024-10-08 16:26:47.785062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.630 [2024-10-08 16:26:47.788012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.630 pt2 00:18:54.630 [2024-10-08 16:26:47.788179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.630 [2024-10-08 16:26:47.788294] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:54.630 [2024-10-08 16:26:47.788356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.630 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.630 "name": "raid_bdev1", 00:18:54.630 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:54.630 "strip_size_kb": 64, 00:18:54.630 "state": "configuring", 00:18:54.630 "raid_level": "raid5f", 00:18:54.631 "superblock": true, 00:18:54.631 
"num_base_bdevs": 4, 00:18:54.631 "num_base_bdevs_discovered": 1, 00:18:54.631 "num_base_bdevs_operational": 3, 00:18:54.631 "base_bdevs_list": [ 00:18:54.631 { 00:18:54.631 "name": null, 00:18:54.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.631 "is_configured": false, 00:18:54.631 "data_offset": 2048, 00:18:54.631 "data_size": 63488 00:18:54.631 }, 00:18:54.631 { 00:18:54.631 "name": "pt2", 00:18:54.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.631 "is_configured": true, 00:18:54.631 "data_offset": 2048, 00:18:54.631 "data_size": 63488 00:18:54.631 }, 00:18:54.631 { 00:18:54.631 "name": null, 00:18:54.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.631 "is_configured": false, 00:18:54.631 "data_offset": 2048, 00:18:54.631 "data_size": 63488 00:18:54.631 }, 00:18:54.631 { 00:18:54.631 "name": null, 00:18:54.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.631 "is_configured": false, 00:18:54.631 "data_offset": 2048, 00:18:54.631 "data_size": 63488 00:18:54.631 } 00:18:54.631 ] 00:18:54.631 }' 00:18:54.631 16:26:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.631 16:26:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.206 [2024-10-08 16:26:48.312930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.206 [2024-10-08 
16:26:48.313125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.206 [2024-10-08 16:26:48.313200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:55.206 [2024-10-08 16:26:48.313314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.206 [2024-10-08 16:26:48.313836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.206 [2024-10-08 16:26:48.313878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.206 [2024-10-08 16:26:48.313998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:55.206 [2024-10-08 16:26:48.314034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:55.206 pt3 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.206 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.207 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.207 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.207 "name": "raid_bdev1", 00:18:55.207 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:55.207 "strip_size_kb": 64, 00:18:55.207 "state": "configuring", 00:18:55.207 "raid_level": "raid5f", 00:18:55.207 "superblock": true, 00:18:55.207 "num_base_bdevs": 4, 00:18:55.207 "num_base_bdevs_discovered": 2, 00:18:55.207 "num_base_bdevs_operational": 3, 00:18:55.207 "base_bdevs_list": [ 00:18:55.207 { 00:18:55.207 "name": null, 00:18:55.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.207 "is_configured": false, 00:18:55.207 "data_offset": 2048, 00:18:55.207 "data_size": 63488 00:18:55.207 }, 00:18:55.207 { 00:18:55.207 "name": "pt2", 00:18:55.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.207 "is_configured": true, 00:18:55.207 "data_offset": 2048, 00:18:55.207 "data_size": 63488 00:18:55.207 }, 00:18:55.207 { 00:18:55.207 "name": "pt3", 00:18:55.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.207 "is_configured": true, 00:18:55.207 "data_offset": 2048, 00:18:55.207 "data_size": 63488 00:18:55.207 }, 00:18:55.207 { 00:18:55.207 "name": null, 00:18:55.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.207 "is_configured": false, 00:18:55.207 "data_offset": 2048, 
00:18:55.207 "data_size": 63488 00:18:55.207 } 00:18:55.207 ] 00:18:55.207 }' 00:18:55.207 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.207 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.775 [2024-10-08 16:26:48.817116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:55.775 [2024-10-08 16:26:48.817377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.775 [2024-10-08 16:26:48.817423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:55.775 [2024-10-08 16:26:48.817445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.775 [2024-10-08 16:26:48.818089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.775 [2024-10-08 16:26:48.818125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:55.775 [2024-10-08 16:26:48.818228] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:55.775 [2024-10-08 16:26:48.818260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:55.775 [2024-10-08 16:26:48.818429] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:55.775 [2024-10-08 16:26:48.818444] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:55.775 [2024-10-08 16:26:48.818760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:55.775 pt4 00:18:55.775 [2024-10-08 16:26:48.825195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:55.775 [2024-10-08 16:26:48.825228] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:55.775 [2024-10-08 16:26:48.825575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.775 
16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.775 "name": "raid_bdev1", 00:18:55.775 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:55.775 "strip_size_kb": 64, 00:18:55.775 "state": "online", 00:18:55.775 "raid_level": "raid5f", 00:18:55.775 "superblock": true, 00:18:55.775 "num_base_bdevs": 4, 00:18:55.775 "num_base_bdevs_discovered": 3, 00:18:55.775 "num_base_bdevs_operational": 3, 00:18:55.775 "base_bdevs_list": [ 00:18:55.775 { 00:18:55.775 "name": null, 00:18:55.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.775 "is_configured": false, 00:18:55.775 "data_offset": 2048, 00:18:55.775 "data_size": 63488 00:18:55.775 }, 00:18:55.775 { 00:18:55.775 "name": "pt2", 00:18:55.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.775 "is_configured": true, 00:18:55.775 "data_offset": 2048, 00:18:55.775 "data_size": 63488 00:18:55.775 }, 00:18:55.775 { 00:18:55.775 "name": "pt3", 00:18:55.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.775 "is_configured": true, 00:18:55.775 "data_offset": 2048, 00:18:55.775 "data_size": 63488 00:18:55.775 }, 00:18:55.775 { 00:18:55.775 "name": "pt4", 00:18:55.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.775 "is_configured": true, 00:18:55.775 "data_offset": 2048, 00:18:55.775 "data_size": 63488 00:18:55.775 } 00:18:55.775 ] 00:18:55.775 }' 00:18:55.775 16:26:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.775 16:26:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.034 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.034 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.034 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.034 [2024-10-08 16:26:49.352996] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.034 [2024-10-08 16:26:49.353299] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.034 [2024-10-08 16:26:49.353425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.034 [2024-10-08 16:26:49.353535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.034 [2024-10-08 16:26:49.353562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 [2024-10-08 16:26:49.424978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:56.293 [2024-10-08 16:26:49.425064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.293 [2024-10-08 16:26:49.425090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:56.293 [2024-10-08 16:26:49.425106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.293 [2024-10-08 16:26:49.427979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.293 [2024-10-08 16:26:49.428026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:56.293 [2024-10-08 16:26:49.428127] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:56.293 [2024-10-08 16:26:49.428196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:56.293 
[2024-10-08 16:26:49.428352] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:56.293 [2024-10-08 16:26:49.428390] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.293 [2024-10-08 16:26:49.428416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:56.293 [2024-10-08 16:26:49.428484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.293 [2024-10-08 16:26:49.428645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.293 pt1 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.293 "name": "raid_bdev1", 00:18:56.293 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:56.293 "strip_size_kb": 64, 00:18:56.293 "state": "configuring", 00:18:56.293 "raid_level": "raid5f", 00:18:56.293 "superblock": true, 00:18:56.293 "num_base_bdevs": 4, 00:18:56.293 "num_base_bdevs_discovered": 2, 00:18:56.293 "num_base_bdevs_operational": 3, 00:18:56.293 "base_bdevs_list": [ 00:18:56.293 { 00:18:56.293 "name": null, 00:18:56.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.293 "is_configured": false, 00:18:56.293 "data_offset": 2048, 00:18:56.293 "data_size": 63488 00:18:56.293 }, 00:18:56.293 { 00:18:56.293 "name": "pt2", 00:18:56.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.293 "is_configured": true, 00:18:56.293 "data_offset": 2048, 00:18:56.293 "data_size": 63488 00:18:56.293 }, 00:18:56.293 { 00:18:56.293 "name": "pt3", 00:18:56.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.293 "is_configured": true, 00:18:56.293 "data_offset": 2048, 00:18:56.293 "data_size": 63488 00:18:56.293 }, 00:18:56.293 { 00:18:56.293 "name": null, 00:18:56.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.293 "is_configured": false, 00:18:56.293 "data_offset": 2048, 00:18:56.293 "data_size": 63488 00:18:56.293 } 00:18:56.293 ] 
00:18:56.293 }' 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.293 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.860 16:26:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.860 [2024-10-08 16:26:49.993192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.860 [2024-10-08 16:26:49.993420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.860 [2024-10-08 16:26:49.993514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:56.860 [2024-10-08 16:26:49.993668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.860 [2024-10-08 16:26:49.994263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.860 [2024-10-08 16:26:49.994413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:56.860 [2024-10-08 16:26:49.994633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:56.860 [2024-10-08 16:26:49.994790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.860 [2024-10-08 16:26:49.994982] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:56.860 [2024-10-08 16:26:49.994999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:56.860 [2024-10-08 16:26:49.995305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:56.860 pt4 00:18:56.860 [2024-10-08 16:26:50.001804] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:56.860 [2024-10-08 16:26:50.001837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:56.861 [2024-10-08 16:26:50.002166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.861 16:26:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.861 "name": "raid_bdev1", 00:18:56.861 "uuid": "8dfb709e-09bf-4abe-9dfb-7c234e5eadd3", 00:18:56.861 "strip_size_kb": 64, 00:18:56.861 "state": "online", 00:18:56.861 "raid_level": "raid5f", 00:18:56.861 "superblock": true, 00:18:56.861 "num_base_bdevs": 4, 00:18:56.861 "num_base_bdevs_discovered": 3, 00:18:56.861 "num_base_bdevs_operational": 3, 00:18:56.861 "base_bdevs_list": [ 00:18:56.861 { 00:18:56.861 "name": null, 00:18:56.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.861 "is_configured": false, 00:18:56.861 "data_offset": 2048, 00:18:56.861 "data_size": 63488 00:18:56.861 }, 00:18:56.861 { 00:18:56.861 "name": "pt2", 00:18:56.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.861 "is_configured": true, 00:18:56.861 "data_offset": 2048, 00:18:56.861 "data_size": 63488 00:18:56.861 }, 00:18:56.861 { 00:18:56.861 "name": "pt3", 00:18:56.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.861 "is_configured": true, 00:18:56.861 "data_offset": 2048, 00:18:56.861 "data_size": 63488 
00:18:56.861 }, 00:18:56.861 { 00:18:56.861 "name": "pt4", 00:18:56.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.861 "is_configured": true, 00:18:56.861 "data_offset": 2048, 00:18:56.861 "data_size": 63488 00:18:56.861 } 00:18:56.861 ] 00:18:56.861 }' 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.861 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 [2024-10-08 16:26:50.577823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 '!=' 8dfb709e-09bf-4abe-9dfb-7c234e5eadd3 ']' 00:18:57.428 16:26:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84900 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84900 ']' 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84900 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.428 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84900 00:18:57.428 killing process with pid 84900 00:18:57.429 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:57.429 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:57.429 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84900' 00:18:57.429 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84900 00:18:57.429 [2024-10-08 16:26:50.658981] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.429 16:26:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84900 00:18:57.429 [2024-10-08 16:26:50.659097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.429 [2024-10-08 16:26:50.659198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.429 [2024-10-08 16:26:50.659220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:57.996 [2024-10-08 16:26:51.013792] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.932 ************************************ 00:18:58.932 END TEST raid5f_superblock_test 00:18:58.932 
************************************ 00:18:58.932 16:26:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:58.932 00:18:58.932 real 0m9.560s 00:18:58.932 user 0m15.489s 00:18:58.932 sys 0m1.455s 00:18:58.932 16:26:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.932 16:26:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.932 16:26:52 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:58.932 16:26:52 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:58.932 16:26:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:58.932 16:26:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.932 16:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.932 ************************************ 00:18:58.932 START TEST raid5f_rebuild_test 00:18:58.932 ************************************ 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:58.932 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:59.191 16:26:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85391 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:59.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85391 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85391 ']' 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.191 16:26:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.191 [2024-10-08 16:26:52.374279] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:18:59.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:59.191 Zero copy mechanism will not be used. 
00:18:59.191 [2024-10-08 16:26:52.375565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85391 ] 00:18:59.450 [2024-10-08 16:26:52.557654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.711 [2024-10-08 16:26:52.788356] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.711 [2024-10-08 16:26:52.989260] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.711 [2024-10-08 16:26:52.989347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.278 BaseBdev1_malloc 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.278 [2024-10-08 16:26:53.381923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:00.278 [2024-10-08 16:26:53.382888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.278 [2024-10-08 16:26:53.383048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:00.278 [2024-10-08 16:26:53.383271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.278 [2024-10-08 16:26:53.386375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.278 [2024-10-08 16:26:53.386677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:00.278 BaseBdev1 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.278 BaseBdev2_malloc 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.278 [2024-10-08 16:26:53.450058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:00.278 [2024-10-08 16:26:53.450564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.278 [2024-10-08 16:26:53.450725] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:00.278 [2024-10-08 16:26:53.450964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.278 [2024-10-08 16:26:53.453843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.278 BaseBdev2 00:19:00.278 [2024-10-08 16:26:53.454101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.278 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 BaseBdev3_malloc 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 [2024-10-08 16:26:53.501137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:00.279 [2024-10-08 16:26:53.501486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.279 [2024-10-08 16:26:53.501734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:00.279 [2024-10-08 16:26:53.501950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.279 
[2024-10-08 16:26:53.504627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.279 BaseBdev3 00:19:00.279 [2024-10-08 16:26:53.504884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 BaseBdev4_malloc 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 [2024-10-08 16:26:53.547744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:00.279 [2024-10-08 16:26:53.547944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.279 [2024-10-08 16:26:53.548213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:00.279 [2024-10-08 16:26:53.548349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.279 [2024-10-08 16:26:53.551114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.279 BaseBdev4 00:19:00.279 [2024-10-08 16:26:53.551372] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 spare_malloc 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.279 spare_delay 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.279 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.537 [2024-10-08 16:26:53.602265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.537 [2024-10-08 16:26:53.602679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.537 [2024-10-08 16:26:53.602722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:00.537 [2024-10-08 16:26:53.602742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.537 [2024-10-08 16:26:53.605437] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.537 spare 00:19:00.537 [2024-10-08 16:26:53.605647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.537 [2024-10-08 16:26:53.610454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.537 [2024-10-08 16:26:53.612781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.537 [2024-10-08 16:26:53.613014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.537 [2024-10-08 16:26:53.613106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:00.537 [2024-10-08 16:26:53.613225] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:00.537 [2024-10-08 16:26:53.613244] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:00.537 [2024-10-08 16:26:53.613576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:00.537 [2024-10-08 16:26:53.620039] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:00.537 [2024-10-08 16:26:53.620206] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:00.537 [2024-10-08 16:26:53.620488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.537 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.537 "name": "raid_bdev1", 00:19:00.537 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:00.537 "strip_size_kb": 64, 00:19:00.537 "state": 
"online", 00:19:00.537 "raid_level": "raid5f", 00:19:00.537 "superblock": false, 00:19:00.537 "num_base_bdevs": 4, 00:19:00.537 "num_base_bdevs_discovered": 4, 00:19:00.537 "num_base_bdevs_operational": 4, 00:19:00.537 "base_bdevs_list": [ 00:19:00.537 { 00:19:00.537 "name": "BaseBdev1", 00:19:00.537 "uuid": "bae16272-8860-59f3-b2d5-7fb45af20b16", 00:19:00.537 "is_configured": true, 00:19:00.537 "data_offset": 0, 00:19:00.537 "data_size": 65536 00:19:00.537 }, 00:19:00.537 { 00:19:00.537 "name": "BaseBdev2", 00:19:00.538 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:00.538 "is_configured": true, 00:19:00.538 "data_offset": 0, 00:19:00.538 "data_size": 65536 00:19:00.538 }, 00:19:00.538 { 00:19:00.538 "name": "BaseBdev3", 00:19:00.538 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:00.538 "is_configured": true, 00:19:00.538 "data_offset": 0, 00:19:00.538 "data_size": 65536 00:19:00.538 }, 00:19:00.538 { 00:19:00.538 "name": "BaseBdev4", 00:19:00.538 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:00.538 "is_configured": true, 00:19:00.538 "data_offset": 0, 00:19:00.538 "data_size": 65536 00:19:00.538 } 00:19:00.538 ] 00:19:00.538 }' 00:19:00.538 16:26:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.538 16:26:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.113 [2024-10-08 16:26:54.143932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:19:01.113 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:01.371 [2024-10-08 16:26:54.479817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:01.371 /dev/nbd0 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.371 1+0 records in 00:19:01.371 1+0 records out 00:19:01.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357608 s, 11.5 MB/s 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:01.371 16:26:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:01.938 512+0 records in 00:19:01.938 512+0 records out 00:19:01.938 100663296 bytes (101 MB, 96 MiB) copied, 0.607408 s, 166 MB/s 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.938 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.196 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.196 
[2024-10-08 16:26:55.469143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 [2024-10-08 16:26:55.481006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.455 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.455 "name": "raid_bdev1", 00:19:02.455 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:02.455 "strip_size_kb": 64, 00:19:02.455 "state": "online", 00:19:02.455 "raid_level": "raid5f", 00:19:02.455 "superblock": false, 00:19:02.455 "num_base_bdevs": 4, 00:19:02.455 "num_base_bdevs_discovered": 3, 00:19:02.455 "num_base_bdevs_operational": 3, 00:19:02.455 "base_bdevs_list": [ 00:19:02.455 { 00:19:02.455 "name": null, 00:19:02.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.455 "is_configured": false, 00:19:02.455 "data_offset": 0, 00:19:02.455 "data_size": 65536 00:19:02.455 }, 00:19:02.455 { 00:19:02.455 "name": "BaseBdev2", 00:19:02.455 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:02.455 "is_configured": true, 00:19:02.455 "data_offset": 0, 00:19:02.455 "data_size": 65536 00:19:02.455 }, 00:19:02.455 { 00:19:02.455 "name": "BaseBdev3", 00:19:02.455 "uuid": 
"148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:02.455 "is_configured": true, 00:19:02.455 "data_offset": 0, 00:19:02.455 "data_size": 65536 00:19:02.455 }, 00:19:02.455 { 00:19:02.455 "name": "BaseBdev4", 00:19:02.455 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:02.455 "is_configured": true, 00:19:02.455 "data_offset": 0, 00:19:02.455 "data_size": 65536 00:19:02.455 } 00:19:02.455 ] 00:19:02.455 }' 00:19:02.455 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.455 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.713 16:26:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.713 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.713 16:26:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.713 [2024-10-08 16:26:55.997214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.713 [2024-10-08 16:26:56.010297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:02.713 16:26:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.713 16:26:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:02.713 [2024-10-08 16:26:56.018867] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.089 16:26:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.089 "name": "raid_bdev1", 00:19:04.089 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:04.089 "strip_size_kb": 64, 00:19:04.089 "state": "online", 00:19:04.089 "raid_level": "raid5f", 00:19:04.089 "superblock": false, 00:19:04.089 "num_base_bdevs": 4, 00:19:04.089 "num_base_bdevs_discovered": 4, 00:19:04.089 "num_base_bdevs_operational": 4, 00:19:04.089 "process": { 00:19:04.089 "type": "rebuild", 00:19:04.089 "target": "spare", 00:19:04.089 "progress": { 00:19:04.089 "blocks": 17280, 00:19:04.089 "percent": 8 00:19:04.089 } 00:19:04.089 }, 00:19:04.089 "base_bdevs_list": [ 00:19:04.089 { 00:19:04.089 "name": "spare", 00:19:04.089 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:04.089 "is_configured": true, 00:19:04.089 "data_offset": 0, 00:19:04.089 "data_size": 65536 00:19:04.089 }, 00:19:04.089 { 00:19:04.089 "name": "BaseBdev2", 00:19:04.089 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:04.089 "is_configured": true, 00:19:04.089 "data_offset": 0, 00:19:04.089 "data_size": 65536 00:19:04.089 }, 00:19:04.089 { 00:19:04.089 "name": "BaseBdev3", 00:19:04.089 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:04.089 "is_configured": true, 00:19:04.089 "data_offset": 0, 00:19:04.089 "data_size": 65536 00:19:04.089 }, 
00:19:04.089 { 00:19:04.089 "name": "BaseBdev4", 00:19:04.089 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:04.089 "is_configured": true, 00:19:04.089 "data_offset": 0, 00:19:04.089 "data_size": 65536 00:19:04.089 } 00:19:04.089 ] 00:19:04.089 }' 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 [2024-10-08 16:26:57.168313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.089 [2024-10-08 16:26:57.229346] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:04.089 [2024-10-08 16:26:57.229642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.089 [2024-10-08 16:26:57.229674] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.089 [2024-10-08 16:26:57.229691] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.089 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.089 "name": "raid_bdev1", 00:19:04.089 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:04.089 "strip_size_kb": 64, 00:19:04.089 "state": "online", 00:19:04.089 "raid_level": "raid5f", 00:19:04.089 "superblock": false, 00:19:04.089 "num_base_bdevs": 4, 00:19:04.089 "num_base_bdevs_discovered": 3, 00:19:04.089 "num_base_bdevs_operational": 3, 00:19:04.089 "base_bdevs_list": [ 00:19:04.089 { 00:19:04.089 "name": null, 00:19:04.089 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:04.090 "is_configured": false, 00:19:04.090 "data_offset": 0, 00:19:04.090 "data_size": 65536 00:19:04.090 }, 00:19:04.090 { 00:19:04.090 "name": "BaseBdev2", 00:19:04.090 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:04.090 "is_configured": true, 00:19:04.090 "data_offset": 0, 00:19:04.090 "data_size": 65536 00:19:04.090 }, 00:19:04.090 { 00:19:04.090 "name": "BaseBdev3", 00:19:04.090 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:04.090 "is_configured": true, 00:19:04.090 "data_offset": 0, 00:19:04.090 "data_size": 65536 00:19:04.090 }, 00:19:04.090 { 00:19:04.090 "name": "BaseBdev4", 00:19:04.090 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:04.090 "is_configured": true, 00:19:04.090 "data_offset": 0, 00:19:04.090 "data_size": 65536 00:19:04.090 } 00:19:04.090 ] 00:19:04.090 }' 00:19:04.090 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.090 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.706 "name": "raid_bdev1", 00:19:04.706 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:04.706 "strip_size_kb": 64, 00:19:04.706 "state": "online", 00:19:04.706 "raid_level": "raid5f", 00:19:04.706 "superblock": false, 00:19:04.706 "num_base_bdevs": 4, 00:19:04.706 "num_base_bdevs_discovered": 3, 00:19:04.706 "num_base_bdevs_operational": 3, 00:19:04.706 "base_bdevs_list": [ 00:19:04.706 { 00:19:04.706 "name": null, 00:19:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.706 "is_configured": false, 00:19:04.706 "data_offset": 0, 00:19:04.706 "data_size": 65536 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "name": "BaseBdev2", 00:19:04.706 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:04.706 "is_configured": true, 00:19:04.706 "data_offset": 0, 00:19:04.706 "data_size": 65536 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "name": "BaseBdev3", 00:19:04.706 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:04.706 "is_configured": true, 00:19:04.706 "data_offset": 0, 00:19:04.706 "data_size": 65536 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 "name": "BaseBdev4", 00:19:04.706 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:04.706 "is_configured": true, 00:19:04.706 "data_offset": 0, 00:19:04.706 "data_size": 65536 00:19:04.706 } 00:19:04.706 ] 00:19:04.706 }' 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 [2024-10-08 16:26:57.918128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.706 [2024-10-08 16:26:57.931192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.706 16:26:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:04.706 [2024-10-08 16:26:57.940080] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.641 16:26:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.899 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.899 "name": "raid_bdev1", 00:19:05.899 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:05.899 "strip_size_kb": 64, 00:19:05.899 "state": "online", 00:19:05.899 "raid_level": "raid5f", 00:19:05.899 "superblock": false, 00:19:05.899 "num_base_bdevs": 4, 00:19:05.899 "num_base_bdevs_discovered": 4, 00:19:05.899 "num_base_bdevs_operational": 4, 00:19:05.899 "process": { 00:19:05.899 "type": "rebuild", 00:19:05.899 "target": "spare", 00:19:05.899 "progress": { 00:19:05.899 "blocks": 17280, 00:19:05.899 "percent": 8 00:19:05.899 } 00:19:05.899 }, 00:19:05.899 "base_bdevs_list": [ 00:19:05.899 { 00:19:05.899 "name": "spare", 00:19:05.899 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:05.899 "is_configured": true, 00:19:05.899 "data_offset": 0, 00:19:05.899 "data_size": 65536 00:19:05.899 }, 00:19:05.899 { 00:19:05.899 "name": "BaseBdev2", 00:19:05.899 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:05.899 "is_configured": true, 00:19:05.899 "data_offset": 0, 00:19:05.899 "data_size": 65536 00:19:05.899 }, 00:19:05.899 { 00:19:05.899 "name": "BaseBdev3", 00:19:05.899 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:05.899 "is_configured": true, 00:19:05.899 "data_offset": 0, 00:19:05.899 "data_size": 65536 00:19:05.899 }, 00:19:05.899 { 00:19:05.899 "name": "BaseBdev4", 00:19:05.899 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:05.899 "is_configured": true, 00:19:05.899 "data_offset": 0, 00:19:05.899 "data_size": 65536 00:19:05.899 } 00:19:05.899 ] 00:19:05.899 }' 00:19:05.899 16:26:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.899 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.899 "name": "raid_bdev1", 00:19:05.899 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:05.899 "strip_size_kb": 64, 
00:19:05.899 "state": "online", 00:19:05.899 "raid_level": "raid5f", 00:19:05.899 "superblock": false, 00:19:05.899 "num_base_bdevs": 4, 00:19:05.899 "num_base_bdevs_discovered": 4, 00:19:05.899 "num_base_bdevs_operational": 4, 00:19:05.899 "process": { 00:19:05.899 "type": "rebuild", 00:19:05.899 "target": "spare", 00:19:05.899 "progress": { 00:19:05.899 "blocks": 21120, 00:19:05.899 "percent": 10 00:19:05.899 } 00:19:05.899 }, 00:19:05.899 "base_bdevs_list": [ 00:19:05.899 { 00:19:05.899 "name": "spare", 00:19:05.899 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:05.899 "is_configured": true, 00:19:05.899 "data_offset": 0, 00:19:05.899 "data_size": 65536 00:19:05.899 }, 00:19:05.899 { 00:19:05.899 "name": "BaseBdev2", 00:19:05.900 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:05.900 "is_configured": true, 00:19:05.900 "data_offset": 0, 00:19:05.900 "data_size": 65536 00:19:05.900 }, 00:19:05.900 { 00:19:05.900 "name": "BaseBdev3", 00:19:05.900 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:05.900 "is_configured": true, 00:19:05.900 "data_offset": 0, 00:19:05.900 "data_size": 65536 00:19:05.900 }, 00:19:05.900 { 00:19:05.900 "name": "BaseBdev4", 00:19:05.900 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:05.900 "is_configured": true, 00:19:05.900 "data_offset": 0, 00:19:05.900 "data_size": 65536 00:19:05.900 } 00:19:05.900 ] 00:19:05.900 }' 00:19:05.900 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.900 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.900 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.157 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.157 16:26:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.092 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.092 "name": "raid_bdev1", 00:19:07.092 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:07.092 "strip_size_kb": 64, 00:19:07.092 "state": "online", 00:19:07.092 "raid_level": "raid5f", 00:19:07.092 "superblock": false, 00:19:07.092 "num_base_bdevs": 4, 00:19:07.092 "num_base_bdevs_discovered": 4, 00:19:07.092 "num_base_bdevs_operational": 4, 00:19:07.092 "process": { 00:19:07.092 "type": "rebuild", 00:19:07.092 "target": "spare", 00:19:07.092 "progress": { 00:19:07.092 "blocks": 44160, 00:19:07.092 "percent": 22 00:19:07.092 } 00:19:07.092 }, 00:19:07.092 "base_bdevs_list": [ 00:19:07.092 { 00:19:07.092 "name": "spare", 00:19:07.092 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:07.092 "is_configured": true, 
00:19:07.092 "data_offset": 0, 00:19:07.092 "data_size": 65536 00:19:07.092 }, 00:19:07.093 { 00:19:07.093 "name": "BaseBdev2", 00:19:07.093 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:07.093 "is_configured": true, 00:19:07.093 "data_offset": 0, 00:19:07.093 "data_size": 65536 00:19:07.093 }, 00:19:07.093 { 00:19:07.093 "name": "BaseBdev3", 00:19:07.093 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:07.093 "is_configured": true, 00:19:07.093 "data_offset": 0, 00:19:07.093 "data_size": 65536 00:19:07.093 }, 00:19:07.093 { 00:19:07.093 "name": "BaseBdev4", 00:19:07.093 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:07.093 "is_configured": true, 00:19:07.093 "data_offset": 0, 00:19:07.093 "data_size": 65536 00:19:07.093 } 00:19:07.093 ] 00:19:07.093 }' 00:19:07.093 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.093 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.093 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.357 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.357 16:27:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.310 "name": "raid_bdev1", 00:19:08.310 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:08.310 "strip_size_kb": 64, 00:19:08.310 "state": "online", 00:19:08.310 "raid_level": "raid5f", 00:19:08.310 "superblock": false, 00:19:08.310 "num_base_bdevs": 4, 00:19:08.310 "num_base_bdevs_discovered": 4, 00:19:08.310 "num_base_bdevs_operational": 4, 00:19:08.310 "process": { 00:19:08.310 "type": "rebuild", 00:19:08.310 "target": "spare", 00:19:08.310 "progress": { 00:19:08.310 "blocks": 65280, 00:19:08.310 "percent": 33 00:19:08.310 } 00:19:08.310 }, 00:19:08.310 "base_bdevs_list": [ 00:19:08.310 { 00:19:08.310 "name": "spare", 00:19:08.310 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:08.310 "is_configured": true, 00:19:08.310 "data_offset": 0, 00:19:08.310 "data_size": 65536 00:19:08.310 }, 00:19:08.310 { 00:19:08.310 "name": "BaseBdev2", 00:19:08.310 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:08.310 "is_configured": true, 00:19:08.310 "data_offset": 0, 00:19:08.310 "data_size": 65536 00:19:08.310 }, 00:19:08.310 { 00:19:08.310 "name": "BaseBdev3", 00:19:08.310 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:08.310 "is_configured": true, 00:19:08.310 "data_offset": 0, 00:19:08.310 "data_size": 65536 00:19:08.310 }, 00:19:08.310 { 00:19:08.310 "name": "BaseBdev4", 00:19:08.310 "uuid": 
"c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:08.310 "is_configured": true, 00:19:08.310 "data_offset": 0, 00:19:08.310 "data_size": 65536 00:19:08.310 } 00:19:08.310 ] 00:19:08.310 }' 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.310 16:27:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.687 16:27:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.687 "name": "raid_bdev1", 00:19:09.687 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:09.687 "strip_size_kb": 64, 00:19:09.687 "state": "online", 00:19:09.687 "raid_level": "raid5f", 00:19:09.687 "superblock": false, 00:19:09.687 "num_base_bdevs": 4, 00:19:09.687 "num_base_bdevs_discovered": 4, 00:19:09.687 "num_base_bdevs_operational": 4, 00:19:09.687 "process": { 00:19:09.687 "type": "rebuild", 00:19:09.687 "target": "spare", 00:19:09.687 "progress": { 00:19:09.687 "blocks": 88320, 00:19:09.687 "percent": 44 00:19:09.687 } 00:19:09.687 }, 00:19:09.687 "base_bdevs_list": [ 00:19:09.687 { 00:19:09.687 "name": "spare", 00:19:09.687 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:09.687 "is_configured": true, 00:19:09.687 "data_offset": 0, 00:19:09.688 "data_size": 65536 00:19:09.688 }, 00:19:09.688 { 00:19:09.688 "name": "BaseBdev2", 00:19:09.688 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:09.688 "is_configured": true, 00:19:09.688 "data_offset": 0, 00:19:09.688 "data_size": 65536 00:19:09.688 }, 00:19:09.688 { 00:19:09.688 "name": "BaseBdev3", 00:19:09.688 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:09.688 "is_configured": true, 00:19:09.688 "data_offset": 0, 00:19:09.688 "data_size": 65536 00:19:09.688 }, 00:19:09.688 { 00:19:09.688 "name": "BaseBdev4", 00:19:09.688 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:09.688 "is_configured": true, 00:19:09.688 "data_offset": 0, 00:19:09.688 "data_size": 65536 00:19:09.688 } 00:19:09.688 ] 00:19:09.688 }' 00:19:09.688 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.688 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.688 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.688 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:19:09.688 16:27:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.622 "name": "raid_bdev1", 00:19:10.622 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:10.622 "strip_size_kb": 64, 00:19:10.622 "state": "online", 00:19:10.622 "raid_level": "raid5f", 00:19:10.622 "superblock": false, 00:19:10.622 "num_base_bdevs": 4, 00:19:10.622 "num_base_bdevs_discovered": 4, 00:19:10.622 "num_base_bdevs_operational": 4, 00:19:10.622 "process": { 00:19:10.622 "type": "rebuild", 00:19:10.622 "target": "spare", 00:19:10.622 "progress": { 00:19:10.622 "blocks": 109440, 00:19:10.622 "percent": 55 00:19:10.622 } 00:19:10.622 }, 00:19:10.622 
"base_bdevs_list": [ 00:19:10.622 { 00:19:10.622 "name": "spare", 00:19:10.622 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:10.622 "is_configured": true, 00:19:10.622 "data_offset": 0, 00:19:10.622 "data_size": 65536 00:19:10.622 }, 00:19:10.622 { 00:19:10.622 "name": "BaseBdev2", 00:19:10.622 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:10.622 "is_configured": true, 00:19:10.622 "data_offset": 0, 00:19:10.622 "data_size": 65536 00:19:10.622 }, 00:19:10.622 { 00:19:10.622 "name": "BaseBdev3", 00:19:10.622 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:10.622 "is_configured": true, 00:19:10.622 "data_offset": 0, 00:19:10.622 "data_size": 65536 00:19:10.622 }, 00:19:10.622 { 00:19:10.622 "name": "BaseBdev4", 00:19:10.622 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:10.622 "is_configured": true, 00:19:10.622 "data_offset": 0, 00:19:10.622 "data_size": 65536 00:19:10.622 } 00:19:10.622 ] 00:19:10.622 }' 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.622 16:27:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.030 16:27:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.030 "name": "raid_bdev1", 00:19:12.030 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:12.030 "strip_size_kb": 64, 00:19:12.030 "state": "online", 00:19:12.030 "raid_level": "raid5f", 00:19:12.030 "superblock": false, 00:19:12.030 "num_base_bdevs": 4, 00:19:12.030 "num_base_bdevs_discovered": 4, 00:19:12.030 "num_base_bdevs_operational": 4, 00:19:12.030 "process": { 00:19:12.030 "type": "rebuild", 00:19:12.030 "target": "spare", 00:19:12.030 "progress": { 00:19:12.030 "blocks": 132480, 00:19:12.030 "percent": 67 00:19:12.030 } 00:19:12.030 }, 00:19:12.030 "base_bdevs_list": [ 00:19:12.030 { 00:19:12.030 "name": "spare", 00:19:12.030 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:12.030 "is_configured": true, 00:19:12.030 "data_offset": 0, 00:19:12.030 "data_size": 65536 00:19:12.030 }, 00:19:12.030 { 00:19:12.030 "name": "BaseBdev2", 00:19:12.030 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:12.030 "is_configured": true, 00:19:12.030 "data_offset": 0, 00:19:12.030 "data_size": 65536 00:19:12.030 }, 00:19:12.030 { 00:19:12.030 "name": "BaseBdev3", 00:19:12.030 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:12.030 
"is_configured": true, 00:19:12.030 "data_offset": 0, 00:19:12.030 "data_size": 65536 00:19:12.030 }, 00:19:12.030 { 00:19:12.030 "name": "BaseBdev4", 00:19:12.030 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:12.030 "is_configured": true, 00:19:12.030 "data_offset": 0, 00:19:12.030 "data_size": 65536 00:19:12.030 } 00:19:12.030 ] 00:19:12.030 }' 00:19:12.030 16:27:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.030 16:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.030 16:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.030 16:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.030 16:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.965 "name": "raid_bdev1", 00:19:12.965 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:12.965 "strip_size_kb": 64, 00:19:12.965 "state": "online", 00:19:12.965 "raid_level": "raid5f", 00:19:12.965 "superblock": false, 00:19:12.965 "num_base_bdevs": 4, 00:19:12.965 "num_base_bdevs_discovered": 4, 00:19:12.965 "num_base_bdevs_operational": 4, 00:19:12.965 "process": { 00:19:12.965 "type": "rebuild", 00:19:12.965 "target": "spare", 00:19:12.965 "progress": { 00:19:12.965 "blocks": 153600, 00:19:12.965 "percent": 78 00:19:12.965 } 00:19:12.965 }, 00:19:12.965 "base_bdevs_list": [ 00:19:12.965 { 00:19:12.965 "name": "spare", 00:19:12.965 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:12.965 "is_configured": true, 00:19:12.965 "data_offset": 0, 00:19:12.965 "data_size": 65536 00:19:12.965 }, 00:19:12.965 { 00:19:12.965 "name": "BaseBdev2", 00:19:12.965 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:12.965 "is_configured": true, 00:19:12.965 "data_offset": 0, 00:19:12.965 "data_size": 65536 00:19:12.965 }, 00:19:12.965 { 00:19:12.965 "name": "BaseBdev3", 00:19:12.965 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:12.965 "is_configured": true, 00:19:12.965 "data_offset": 0, 00:19:12.965 "data_size": 65536 00:19:12.965 }, 00:19:12.965 { 00:19:12.965 "name": "BaseBdev4", 00:19:12.965 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:12.965 "is_configured": true, 00:19:12.965 "data_offset": 0, 00:19:12.965 "data_size": 65536 00:19:12.965 } 00:19:12.965 ] 00:19:12.965 }' 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.965 16:27:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.965 16:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.341 "name": "raid_bdev1", 00:19:14.341 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:14.341 "strip_size_kb": 64, 00:19:14.341 "state": "online", 00:19:14.341 "raid_level": "raid5f", 00:19:14.341 "superblock": false, 00:19:14.341 "num_base_bdevs": 4, 00:19:14.341 "num_base_bdevs_discovered": 4, 00:19:14.341 "num_base_bdevs_operational": 4, 00:19:14.341 "process": { 00:19:14.341 
"type": "rebuild", 00:19:14.341 "target": "spare", 00:19:14.341 "progress": { 00:19:14.341 "blocks": 176640, 00:19:14.341 "percent": 89 00:19:14.341 } 00:19:14.341 }, 00:19:14.341 "base_bdevs_list": [ 00:19:14.341 { 00:19:14.341 "name": "spare", 00:19:14.341 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:14.341 "is_configured": true, 00:19:14.341 "data_offset": 0, 00:19:14.341 "data_size": 65536 00:19:14.341 }, 00:19:14.341 { 00:19:14.341 "name": "BaseBdev2", 00:19:14.341 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:14.341 "is_configured": true, 00:19:14.341 "data_offset": 0, 00:19:14.341 "data_size": 65536 00:19:14.341 }, 00:19:14.341 { 00:19:14.341 "name": "BaseBdev3", 00:19:14.341 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:14.341 "is_configured": true, 00:19:14.341 "data_offset": 0, 00:19:14.341 "data_size": 65536 00:19:14.341 }, 00:19:14.341 { 00:19:14.341 "name": "BaseBdev4", 00:19:14.341 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:14.341 "is_configured": true, 00:19:14.341 "data_offset": 0, 00:19:14.341 "data_size": 65536 00:19:14.341 } 00:19:14.341 ] 00:19:14.341 }' 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.341 16:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.277 [2024-10-08 16:27:08.332779] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:15.277 [2024-10-08 16:27:08.332890] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:15.277 [2024-10-08 16:27:08.332960] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.277 "name": "raid_bdev1", 00:19:15.277 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:15.277 "strip_size_kb": 64, 00:19:15.277 "state": "online", 00:19:15.277 "raid_level": "raid5f", 00:19:15.277 "superblock": false, 00:19:15.277 "num_base_bdevs": 4, 00:19:15.277 "num_base_bdevs_discovered": 4, 00:19:15.277 "num_base_bdevs_operational": 4, 00:19:15.277 "base_bdevs_list": [ 00:19:15.277 { 00:19:15.277 "name": "spare", 00:19:15.277 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:15.277 "is_configured": true, 00:19:15.277 "data_offset": 0, 00:19:15.277 "data_size": 65536 00:19:15.277 }, 00:19:15.277 { 
00:19:15.277 "name": "BaseBdev2", 00:19:15.277 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:15.277 "is_configured": true, 00:19:15.277 "data_offset": 0, 00:19:15.277 "data_size": 65536 00:19:15.277 }, 00:19:15.277 { 00:19:15.277 "name": "BaseBdev3", 00:19:15.277 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:15.277 "is_configured": true, 00:19:15.277 "data_offset": 0, 00:19:15.277 "data_size": 65536 00:19:15.277 }, 00:19:15.277 { 00:19:15.277 "name": "BaseBdev4", 00:19:15.277 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:15.277 "is_configured": true, 00:19:15.277 "data_offset": 0, 00:19:15.277 "data_size": 65536 00:19:15.277 } 00:19:15.277 ] 00:19:15.277 }' 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.277 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.537 "name": "raid_bdev1", 00:19:15.537 "uuid": "57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:15.537 "strip_size_kb": 64, 00:19:15.537 "state": "online", 00:19:15.537 "raid_level": "raid5f", 00:19:15.537 "superblock": false, 00:19:15.537 "num_base_bdevs": 4, 00:19:15.537 "num_base_bdevs_discovered": 4, 00:19:15.537 "num_base_bdevs_operational": 4, 00:19:15.537 "base_bdevs_list": [ 00:19:15.537 { 00:19:15.537 "name": "spare", 00:19:15.537 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev2", 00:19:15.537 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev3", 00:19:15.537 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev4", 00:19:15.537 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 } 00:19:15.537 ] 00:19:15.537 }' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.537 16:27:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.537 "name": "raid_bdev1", 00:19:15.537 "uuid": 
"57910857-9d30-47ff-aa3c-0cf11b3621ff", 00:19:15.537 "strip_size_kb": 64, 00:19:15.537 "state": "online", 00:19:15.537 "raid_level": "raid5f", 00:19:15.537 "superblock": false, 00:19:15.537 "num_base_bdevs": 4, 00:19:15.537 "num_base_bdevs_discovered": 4, 00:19:15.537 "num_base_bdevs_operational": 4, 00:19:15.537 "base_bdevs_list": [ 00:19:15.537 { 00:19:15.537 "name": "spare", 00:19:15.537 "uuid": "56bf4d34-9799-57e6-8f9e-614225475069", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev2", 00:19:15.537 "uuid": "74eb0c2d-0e7b-5d41-b87b-4464840315e3", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev3", 00:19:15.537 "uuid": "148f0102-88d3-51c1-a563-e12af5cb405b", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 }, 00:19:15.537 { 00:19:15.537 "name": "BaseBdev4", 00:19:15.537 "uuid": "c2242518-77d9-5f78-9aa4-c7a9999a4b87", 00:19:15.537 "is_configured": true, 00:19:15.537 "data_offset": 0, 00:19:15.537 "data_size": 65536 00:19:15.537 } 00:19:15.537 ] 00:19:15.537 }' 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.537 16:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 [2024-10-08 16:27:09.310827] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.104 [2024-10-08 16:27:09.311109] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:19:16.104 [2024-10-08 16:27:09.311320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.104 [2024-10-08 16:27:09.311608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.104 [2024-10-08 16:27:09.311637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.104 16:27:09 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.104 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:16.671 /dev/nbd0 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.671 1+0 records in 00:19:16.671 1+0 records out 00:19:16.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374467 s, 10.9 MB/s 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.671 16:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:16.930 /dev/nbd1 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.930 1+0 records in 00:19:16.930 1+0 records out 00:19:16.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395895 s, 10.3 MB/s 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.930 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.188 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.446 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:17.705 16:27:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85391 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85391 ']' 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85391 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85391 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85391' 00:19:17.705 killing process with pid 85391 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 85391 00:19:17.705 Received shutdown signal, test time was about 60.000000 seconds 00:19:17.705 00:19:17.705 Latency(us) 00:19:17.705 [2024-10-08T16:27:11.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.705 [2024-10-08T16:27:11.027Z] =================================================================================================================== 00:19:17.705 [2024-10-08T16:27:11.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.705 16:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 85391 00:19:17.705 [2024-10-08 16:27:10.853176] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.963 [2024-10-08 16:27:11.276757] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:19.339 00:19:19.339 real 0m20.216s 00:19:19.339 user 0m25.092s 00:19:19.339 sys 0m2.304s 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.339 ************************************ 00:19:19.339 END TEST raid5f_rebuild_test 00:19:19.339 ************************************ 00:19:19.339 16:27:12 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:19.339 16:27:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:19.339 16:27:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:19.339 16:27:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.339 ************************************ 00:19:19.339 START TEST raid5f_rebuild_test_sb 00:19:19.339 ************************************ 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:19.339 16:27:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85899 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85899 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85899 ']' 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.339 16:27:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.339 [2024-10-08 16:27:12.636023] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:19.339 [2024-10-08 16:27:12.636442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85899 ] 00:19:19.339 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:19.339 Zero copy mechanism will not be used. 00:19:19.598 [2024-10-08 16:27:12.811427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.856 [2024-10-08 16:27:13.054114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.114 [2024-10-08 16:27:13.261359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.114 [2024-10-08 16:27:13.261449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.373 BaseBdev1_malloc 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.373 [2024-10-08 16:27:13.683671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:20.373 [2024-10-08 16:27:13.683763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.373 [2024-10-08 16:27:13.683795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:20.373 [2024-10-08 16:27:13.683817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.373 [2024-10-08 16:27:13.686542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.373 [2024-10-08 16:27:13.686603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.373 BaseBdev1 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.631 BaseBdev2_malloc 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.631 [2024-10-08 16:27:13.745000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:20.631 
[2024-10-08 16:27:13.745341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.631 [2024-10-08 16:27:13.745379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:20.631 [2024-10-08 16:27:13.745398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.631 [2024-10-08 16:27:13.748116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.631 [2024-10-08 16:27:13.748162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.631 BaseBdev2 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.631 BaseBdev3_malloc 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.631 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 [2024-10-08 16:27:13.795392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:20.632 [2024-10-08 16:27:13.795459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.632 [2024-10-08 16:27:13.795489] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:20.632 [2024-10-08 16:27:13.795508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.632 [2024-10-08 16:27:13.798229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.632 [2024-10-08 16:27:13.798280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:20.632 BaseBdev3 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 BaseBdev4_malloc 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 [2024-10-08 16:27:13.844067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:20.632 [2024-10-08 16:27:13.844716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.632 [2024-10-08 16:27:13.844860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:20.632 [2024-10-08 16:27:13.844960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:20.632 [2024-10-08 16:27:13.847845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.632 [2024-10-08 16:27:13.848114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:20.632 BaseBdev4 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 spare_malloc 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 spare_delay 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 [2024-10-08 16:27:13.908602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.632 [2024-10-08 16:27:13.909033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.632 [2024-10-08 16:27:13.909155] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:20.632 [2024-10-08 16:27:13.909254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.632 [2024-10-08 16:27:13.912068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.632 [2024-10-08 16:27:13.912304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.632 spare 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 [2024-10-08 16:27:13.916787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.632 [2024-10-08 16:27:13.919197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.632 [2024-10-08 16:27:13.919279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.632 [2024-10-08 16:27:13.919360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.632 [2024-10-08 16:27:13.919767] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:20.632 [2024-10-08 16:27:13.919917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:20.632 [2024-10-08 16:27:13.920288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:20.632 [2024-10-08 16:27:13.927152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:20.632 
[2024-10-08 16:27:13.927319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:20.632 [2024-10-08 16:27:13.927738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.632 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.890 16:27:13 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.890 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.890 "name": "raid_bdev1", 00:19:20.890 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:20.890 "strip_size_kb": 64, 00:19:20.890 "state": "online", 00:19:20.890 "raid_level": "raid5f", 00:19:20.890 "superblock": true, 00:19:20.890 "num_base_bdevs": 4, 00:19:20.890 "num_base_bdevs_discovered": 4, 00:19:20.890 "num_base_bdevs_operational": 4, 00:19:20.890 "base_bdevs_list": [ 00:19:20.890 { 00:19:20.890 "name": "BaseBdev1", 00:19:20.890 "uuid": "3fa4a6af-aadb-55cc-a088-643897614604", 00:19:20.890 "is_configured": true, 00:19:20.890 "data_offset": 2048, 00:19:20.890 "data_size": 63488 00:19:20.890 }, 00:19:20.890 { 00:19:20.890 "name": "BaseBdev2", 00:19:20.890 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:20.890 "is_configured": true, 00:19:20.890 "data_offset": 2048, 00:19:20.890 "data_size": 63488 00:19:20.890 }, 00:19:20.890 { 00:19:20.890 "name": "BaseBdev3", 00:19:20.890 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:20.890 "is_configured": true, 00:19:20.890 "data_offset": 2048, 00:19:20.890 "data_size": 63488 00:19:20.890 }, 00:19:20.890 { 00:19:20.890 "name": "BaseBdev4", 00:19:20.890 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:20.890 "is_configured": true, 00:19:20.890 "data_offset": 2048, 00:19:20.890 "data_size": 63488 00:19:20.890 } 00:19:20.890 ] 00:19:20.890 }' 00:19:20.890 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.890 16:27:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.148 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.148 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:21.148 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.148 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.148 [2024-10-08 16:27:14.447402] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.148 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:21.407 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:21.666 [2024-10-08 16:27:14.843257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:21.666 /dev/nbd0 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:19:21.666 1+0 records in 00:19:21.666 1+0 records out 00:19:21.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416731 s, 9.8 MB/s 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:21.666 16:27:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:22.232 496+0 records in 00:19:22.232 496+0 records out 00:19:22.232 97517568 bytes (98 MB, 93 MiB) copied, 0.59588 s, 164 MB/s 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:22.232 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:22.561 [2024-10-08 16:27:15.768633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 [2024-10-08 16:27:15.803645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.561 "name": "raid_bdev1", 00:19:22.561 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:22.561 "strip_size_kb": 64, 00:19:22.561 "state": "online", 00:19:22.561 "raid_level": "raid5f", 00:19:22.561 "superblock": true, 00:19:22.561 "num_base_bdevs": 4, 00:19:22.561 "num_base_bdevs_discovered": 3, 00:19:22.561 
"num_base_bdevs_operational": 3, 00:19:22.561 "base_bdevs_list": [ 00:19:22.561 { 00:19:22.561 "name": null, 00:19:22.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.561 "is_configured": false, 00:19:22.561 "data_offset": 0, 00:19:22.561 "data_size": 63488 00:19:22.561 }, 00:19:22.561 { 00:19:22.561 "name": "BaseBdev2", 00:19:22.561 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:22.561 "is_configured": true, 00:19:22.561 "data_offset": 2048, 00:19:22.561 "data_size": 63488 00:19:22.561 }, 00:19:22.561 { 00:19:22.561 "name": "BaseBdev3", 00:19:22.561 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:22.561 "is_configured": true, 00:19:22.561 "data_offset": 2048, 00:19:22.561 "data_size": 63488 00:19:22.561 }, 00:19:22.561 { 00:19:22.561 "name": "BaseBdev4", 00:19:22.561 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:22.561 "is_configured": true, 00:19:22.561 "data_offset": 2048, 00:19:22.561 "data_size": 63488 00:19:22.561 } 00:19:22.561 ] 00:19:22.561 }' 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.561 16:27:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.128 16:27:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.128 16:27:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.128 16:27:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.128 [2024-10-08 16:27:16.319823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.128 [2024-10-08 16:27:16.333408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:23.128 16:27:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.128 16:27:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:23.128 
[2024-10-08 16:27:16.342384] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.061 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.319 "name": "raid_bdev1", 00:19:24.319 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:24.319 "strip_size_kb": 64, 00:19:24.319 "state": "online", 00:19:24.319 "raid_level": "raid5f", 00:19:24.319 "superblock": true, 00:19:24.319 "num_base_bdevs": 4, 00:19:24.319 "num_base_bdevs_discovered": 4, 00:19:24.319 "num_base_bdevs_operational": 4, 00:19:24.319 "process": { 00:19:24.319 "type": "rebuild", 00:19:24.319 "target": "spare", 00:19:24.319 "progress": { 00:19:24.319 "blocks": 17280, 00:19:24.319 "percent": 9 00:19:24.319 } 00:19:24.319 }, 00:19:24.319 "base_bdevs_list": [ 00:19:24.319 { 00:19:24.319 "name": 
"spare", 00:19:24.319 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:24.319 "is_configured": true, 00:19:24.319 "data_offset": 2048, 00:19:24.319 "data_size": 63488 00:19:24.319 }, 00:19:24.319 { 00:19:24.319 "name": "BaseBdev2", 00:19:24.319 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:24.319 "is_configured": true, 00:19:24.319 "data_offset": 2048, 00:19:24.319 "data_size": 63488 00:19:24.319 }, 00:19:24.319 { 00:19:24.319 "name": "BaseBdev3", 00:19:24.319 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:24.319 "is_configured": true, 00:19:24.319 "data_offset": 2048, 00:19:24.319 "data_size": 63488 00:19:24.319 }, 00:19:24.319 { 00:19:24.319 "name": "BaseBdev4", 00:19:24.319 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:24.319 "is_configured": true, 00:19:24.319 "data_offset": 2048, 00:19:24.319 "data_size": 63488 00:19:24.319 } 00:19:24.319 ] 00:19:24.319 }' 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.319 [2024-10-08 16:27:17.503711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.319 [2024-10-08 16:27:17.554938] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:24.319 [2024-10-08 
16:27:17.555048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.319 [2024-10-08 16:27:17.555077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.319 [2024-10-08 16:27:17.555100] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.319 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.320 "name": "raid_bdev1", 00:19:24.320 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:24.320 "strip_size_kb": 64, 00:19:24.320 "state": "online", 00:19:24.320 "raid_level": "raid5f", 00:19:24.320 "superblock": true, 00:19:24.320 "num_base_bdevs": 4, 00:19:24.320 "num_base_bdevs_discovered": 3, 00:19:24.320 "num_base_bdevs_operational": 3, 00:19:24.320 "base_bdevs_list": [ 00:19:24.320 { 00:19:24.320 "name": null, 00:19:24.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.320 "is_configured": false, 00:19:24.320 "data_offset": 0, 00:19:24.320 "data_size": 63488 00:19:24.320 }, 00:19:24.320 { 00:19:24.320 "name": "BaseBdev2", 00:19:24.320 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:24.320 "is_configured": true, 00:19:24.320 "data_offset": 2048, 00:19:24.320 "data_size": 63488 00:19:24.320 }, 00:19:24.320 { 00:19:24.320 "name": "BaseBdev3", 00:19:24.320 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:24.320 "is_configured": true, 00:19:24.320 "data_offset": 2048, 00:19:24.320 "data_size": 63488 00:19:24.320 }, 00:19:24.320 { 00:19:24.320 "name": "BaseBdev4", 00:19:24.320 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:24.320 "is_configured": true, 00:19:24.320 "data_offset": 2048, 00:19:24.320 "data_size": 63488 00:19:24.320 } 00:19:24.320 ] 00:19:24.320 }' 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.320 16:27:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.885 "name": "raid_bdev1", 00:19:24.885 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:24.885 "strip_size_kb": 64, 00:19:24.885 "state": "online", 00:19:24.885 "raid_level": "raid5f", 00:19:24.885 "superblock": true, 00:19:24.885 "num_base_bdevs": 4, 00:19:24.885 "num_base_bdevs_discovered": 3, 00:19:24.885 "num_base_bdevs_operational": 3, 00:19:24.885 "base_bdevs_list": [ 00:19:24.885 { 00:19:24.885 "name": null, 00:19:24.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.885 "is_configured": false, 00:19:24.885 "data_offset": 0, 00:19:24.885 "data_size": 63488 00:19:24.885 }, 00:19:24.885 { 00:19:24.885 "name": "BaseBdev2", 00:19:24.885 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:24.885 "is_configured": true, 00:19:24.885 "data_offset": 2048, 00:19:24.885 "data_size": 63488 00:19:24.885 }, 00:19:24.885 { 00:19:24.885 "name": "BaseBdev3", 00:19:24.885 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:24.885 "is_configured": true, 
00:19:24.885 "data_offset": 2048, 00:19:24.885 "data_size": 63488 00:19:24.885 }, 00:19:24.885 { 00:19:24.885 "name": "BaseBdev4", 00:19:24.885 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:24.885 "is_configured": true, 00:19:24.885 "data_offset": 2048, 00:19:24.885 "data_size": 63488 00:19:24.885 } 00:19:24.885 ] 00:19:24.885 }' 00:19:24.885 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.145 [2024-10-08 16:27:18.268559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.145 [2024-10-08 16:27:18.281641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.145 16:27:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:25.145 [2024-10-08 16:27:18.290571] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.084 16:27:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.084 "name": "raid_bdev1", 00:19:26.084 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:26.084 "strip_size_kb": 64, 00:19:26.084 "state": "online", 00:19:26.084 "raid_level": "raid5f", 00:19:26.084 "superblock": true, 00:19:26.084 "num_base_bdevs": 4, 00:19:26.084 "num_base_bdevs_discovered": 4, 00:19:26.084 "num_base_bdevs_operational": 4, 00:19:26.084 "process": { 00:19:26.084 "type": "rebuild", 00:19:26.084 "target": "spare", 00:19:26.084 "progress": { 00:19:26.084 "blocks": 17280, 00:19:26.084 "percent": 9 00:19:26.084 } 00:19:26.084 }, 00:19:26.084 "base_bdevs_list": [ 00:19:26.084 { 00:19:26.084 "name": "spare", 00:19:26.084 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:26.084 "is_configured": true, 00:19:26.084 "data_offset": 2048, 00:19:26.084 "data_size": 63488 00:19:26.084 }, 00:19:26.084 { 00:19:26.084 "name": "BaseBdev2", 00:19:26.084 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:26.084 "is_configured": true, 00:19:26.084 "data_offset": 2048, 00:19:26.084 "data_size": 63488 
00:19:26.084 }, 00:19:26.084 { 00:19:26.084 "name": "BaseBdev3", 00:19:26.084 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:26.084 "is_configured": true, 00:19:26.084 "data_offset": 2048, 00:19:26.084 "data_size": 63488 00:19:26.084 }, 00:19:26.084 { 00:19:26.084 "name": "BaseBdev4", 00:19:26.084 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:26.084 "is_configured": true, 00:19:26.084 "data_offset": 2048, 00:19:26.084 "data_size": 63488 00:19:26.084 } 00:19:26.084 ] 00:19:26.084 }' 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.084 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:26.342 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=709 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.342 16:27:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.342 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.342 "name": "raid_bdev1", 00:19:26.342 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:26.342 "strip_size_kb": 64, 00:19:26.342 "state": "online", 00:19:26.342 "raid_level": "raid5f", 00:19:26.342 "superblock": true, 00:19:26.342 "num_base_bdevs": 4, 00:19:26.342 "num_base_bdevs_discovered": 4, 00:19:26.342 "num_base_bdevs_operational": 4, 00:19:26.342 "process": { 00:19:26.342 "type": "rebuild", 00:19:26.342 "target": "spare", 00:19:26.342 "progress": { 00:19:26.342 "blocks": 21120, 00:19:26.342 "percent": 11 00:19:26.342 } 00:19:26.342 }, 00:19:26.342 "base_bdevs_list": [ 00:19:26.342 { 00:19:26.342 "name": "spare", 00:19:26.342 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:26.342 "is_configured": true, 00:19:26.342 "data_offset": 2048, 00:19:26.342 "data_size": 63488 00:19:26.342 }, 00:19:26.342 { 00:19:26.342 "name": "BaseBdev2", 00:19:26.343 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:26.343 "is_configured": true, 00:19:26.343 "data_offset": 2048, 00:19:26.343 "data_size": 63488 
00:19:26.343 }, 00:19:26.343 { 00:19:26.343 "name": "BaseBdev3", 00:19:26.343 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:26.343 "is_configured": true, 00:19:26.343 "data_offset": 2048, 00:19:26.343 "data_size": 63488 00:19:26.343 }, 00:19:26.343 { 00:19:26.343 "name": "BaseBdev4", 00:19:26.343 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:26.343 "is_configured": true, 00:19:26.343 "data_offset": 2048, 00:19:26.343 "data_size": 63488 00:19:26.343 } 00:19:26.343 ] 00:19:26.343 }' 00:19:26.343 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.343 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.343 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.343 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.343 16:27:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.717 "name": "raid_bdev1", 00:19:27.717 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:27.717 "strip_size_kb": 64, 00:19:27.717 "state": "online", 00:19:27.717 "raid_level": "raid5f", 00:19:27.717 "superblock": true, 00:19:27.717 "num_base_bdevs": 4, 00:19:27.717 "num_base_bdevs_discovered": 4, 00:19:27.717 "num_base_bdevs_operational": 4, 00:19:27.717 "process": { 00:19:27.717 "type": "rebuild", 00:19:27.717 "target": "spare", 00:19:27.717 "progress": { 00:19:27.717 "blocks": 44160, 00:19:27.717 "percent": 23 00:19:27.717 } 00:19:27.717 }, 00:19:27.717 "base_bdevs_list": [ 00:19:27.717 { 00:19:27.717 "name": "spare", 00:19:27.717 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:27.717 "is_configured": true, 00:19:27.717 "data_offset": 2048, 00:19:27.717 "data_size": 63488 00:19:27.717 }, 00:19:27.717 { 00:19:27.717 "name": "BaseBdev2", 00:19:27.717 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:27.717 "is_configured": true, 00:19:27.717 "data_offset": 2048, 00:19:27.717 "data_size": 63488 00:19:27.717 }, 00:19:27.717 { 00:19:27.717 "name": "BaseBdev3", 00:19:27.717 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:27.717 "is_configured": true, 00:19:27.717 "data_offset": 2048, 00:19:27.717 "data_size": 63488 00:19:27.717 }, 00:19:27.717 { 00:19:27.717 "name": "BaseBdev4", 00:19:27.717 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:27.717 "is_configured": true, 00:19:27.717 "data_offset": 2048, 00:19:27.717 "data_size": 63488 00:19:27.717 } 00:19:27.717 ] 00:19:27.717 }' 00:19:27.717 16:27:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.717 16:27:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:28.651 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.651 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.651 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.651 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.652 "name": "raid_bdev1", 00:19:28.652 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:28.652 
"strip_size_kb": 64, 00:19:28.652 "state": "online", 00:19:28.652 "raid_level": "raid5f", 00:19:28.652 "superblock": true, 00:19:28.652 "num_base_bdevs": 4, 00:19:28.652 "num_base_bdevs_discovered": 4, 00:19:28.652 "num_base_bdevs_operational": 4, 00:19:28.652 "process": { 00:19:28.652 "type": "rebuild", 00:19:28.652 "target": "spare", 00:19:28.652 "progress": { 00:19:28.652 "blocks": 65280, 00:19:28.652 "percent": 34 00:19:28.652 } 00:19:28.652 }, 00:19:28.652 "base_bdevs_list": [ 00:19:28.652 { 00:19:28.652 "name": "spare", 00:19:28.652 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:28.652 "is_configured": true, 00:19:28.652 "data_offset": 2048, 00:19:28.652 "data_size": 63488 00:19:28.652 }, 00:19:28.652 { 00:19:28.652 "name": "BaseBdev2", 00:19:28.652 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:28.652 "is_configured": true, 00:19:28.652 "data_offset": 2048, 00:19:28.652 "data_size": 63488 00:19:28.652 }, 00:19:28.652 { 00:19:28.652 "name": "BaseBdev3", 00:19:28.652 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:28.652 "is_configured": true, 00:19:28.652 "data_offset": 2048, 00:19:28.652 "data_size": 63488 00:19:28.652 }, 00:19:28.652 { 00:19:28.652 "name": "BaseBdev4", 00:19:28.652 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:28.652 "is_configured": true, 00:19:28.652 "data_offset": 2048, 00:19:28.652 "data_size": 63488 00:19:28.652 } 00:19:28.652 ] 00:19:28.652 }' 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.652 16:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.026 
16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.026 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.026 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.026 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.026 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.026 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.027 "name": "raid_bdev1", 00:19:30.027 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:30.027 "strip_size_kb": 64, 00:19:30.027 "state": "online", 00:19:30.027 "raid_level": "raid5f", 00:19:30.027 "superblock": true, 00:19:30.027 "num_base_bdevs": 4, 00:19:30.027 "num_base_bdevs_discovered": 4, 00:19:30.027 "num_base_bdevs_operational": 4, 00:19:30.027 "process": { 00:19:30.027 "type": "rebuild", 00:19:30.027 "target": "spare", 00:19:30.027 "progress": { 00:19:30.027 "blocks": 88320, 00:19:30.027 "percent": 46 00:19:30.027 } 00:19:30.027 }, 00:19:30.027 "base_bdevs_list": [ 00:19:30.027 { 00:19:30.027 "name": "spare", 00:19:30.027 "uuid": 
"ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:30.027 "is_configured": true, 00:19:30.027 "data_offset": 2048, 00:19:30.027 "data_size": 63488 00:19:30.027 }, 00:19:30.027 { 00:19:30.027 "name": "BaseBdev2", 00:19:30.027 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:30.027 "is_configured": true, 00:19:30.027 "data_offset": 2048, 00:19:30.027 "data_size": 63488 00:19:30.027 }, 00:19:30.027 { 00:19:30.027 "name": "BaseBdev3", 00:19:30.027 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:30.027 "is_configured": true, 00:19:30.027 "data_offset": 2048, 00:19:30.027 "data_size": 63488 00:19:30.027 }, 00:19:30.027 { 00:19:30.027 "name": "BaseBdev4", 00:19:30.027 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:30.027 "is_configured": true, 00:19:30.027 "data_offset": 2048, 00:19:30.027 "data_size": 63488 00:19:30.027 } 00:19:30.027 ] 00:19:30.027 }' 00:19:30.027 16:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.027 16:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.027 16:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.027 16:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.027 16:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.960 "name": "raid_bdev1", 00:19:30.960 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:30.960 "strip_size_kb": 64, 00:19:30.960 "state": "online", 00:19:30.960 "raid_level": "raid5f", 00:19:30.960 "superblock": true, 00:19:30.960 "num_base_bdevs": 4, 00:19:30.960 "num_base_bdevs_discovered": 4, 00:19:30.960 "num_base_bdevs_operational": 4, 00:19:30.960 "process": { 00:19:30.960 "type": "rebuild", 00:19:30.960 "target": "spare", 00:19:30.960 "progress": { 00:19:30.960 "blocks": 109440, 00:19:30.960 "percent": 57 00:19:30.960 } 00:19:30.960 }, 00:19:30.960 "base_bdevs_list": [ 00:19:30.960 { 00:19:30.960 "name": "spare", 00:19:30.960 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:30.960 "is_configured": true, 00:19:30.960 "data_offset": 2048, 00:19:30.960 "data_size": 63488 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "name": "BaseBdev2", 00:19:30.960 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:30.960 "is_configured": true, 00:19:30.960 "data_offset": 2048, 00:19:30.960 "data_size": 63488 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "name": "BaseBdev3", 00:19:30.960 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:30.960 "is_configured": true, 00:19:30.960 
"data_offset": 2048, 00:19:30.960 "data_size": 63488 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "name": "BaseBdev4", 00:19:30.960 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:30.960 "is_configured": true, 00:19:30.960 "data_offset": 2048, 00:19:30.960 "data_size": 63488 00:19:30.960 } 00:19:30.960 ] 00:19:30.960 }' 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.960 16:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.333 "name": "raid_bdev1", 00:19:32.333 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:32.333 "strip_size_kb": 64, 00:19:32.333 "state": "online", 00:19:32.333 "raid_level": "raid5f", 00:19:32.333 "superblock": true, 00:19:32.333 "num_base_bdevs": 4, 00:19:32.333 "num_base_bdevs_discovered": 4, 00:19:32.333 "num_base_bdevs_operational": 4, 00:19:32.333 "process": { 00:19:32.333 "type": "rebuild", 00:19:32.333 "target": "spare", 00:19:32.333 "progress": { 00:19:32.333 "blocks": 132480, 00:19:32.333 "percent": 69 00:19:32.333 } 00:19:32.333 }, 00:19:32.333 "base_bdevs_list": [ 00:19:32.333 { 00:19:32.333 "name": "spare", 00:19:32.333 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:32.333 "is_configured": true, 00:19:32.333 "data_offset": 2048, 00:19:32.333 "data_size": 63488 00:19:32.333 }, 00:19:32.333 { 00:19:32.333 "name": "BaseBdev2", 00:19:32.333 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:32.333 "is_configured": true, 00:19:32.333 "data_offset": 2048, 00:19:32.333 "data_size": 63488 00:19:32.333 }, 00:19:32.333 { 00:19:32.333 "name": "BaseBdev3", 00:19:32.333 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:32.333 "is_configured": true, 00:19:32.333 "data_offset": 2048, 00:19:32.333 "data_size": 63488 00:19:32.333 }, 00:19:32.333 { 00:19:32.333 "name": "BaseBdev4", 00:19:32.333 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:32.333 "is_configured": true, 00:19:32.333 "data_offset": 2048, 00:19:32.333 "data_size": 63488 00:19:32.333 } 00:19:32.333 ] 00:19:32.333 }' 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.333 16:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.280 "name": "raid_bdev1", 00:19:33.280 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:33.280 "strip_size_kb": 64, 00:19:33.280 "state": "online", 00:19:33.280 "raid_level": "raid5f", 00:19:33.280 "superblock": true, 00:19:33.280 "num_base_bdevs": 4, 00:19:33.280 "num_base_bdevs_discovered": 4, 
00:19:33.280 "num_base_bdevs_operational": 4, 00:19:33.280 "process": { 00:19:33.280 "type": "rebuild", 00:19:33.280 "target": "spare", 00:19:33.280 "progress": { 00:19:33.280 "blocks": 153600, 00:19:33.280 "percent": 80 00:19:33.280 } 00:19:33.280 }, 00:19:33.280 "base_bdevs_list": [ 00:19:33.280 { 00:19:33.280 "name": "spare", 00:19:33.280 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:33.280 "is_configured": true, 00:19:33.280 "data_offset": 2048, 00:19:33.280 "data_size": 63488 00:19:33.280 }, 00:19:33.280 { 00:19:33.280 "name": "BaseBdev2", 00:19:33.280 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:33.280 "is_configured": true, 00:19:33.280 "data_offset": 2048, 00:19:33.280 "data_size": 63488 00:19:33.280 }, 00:19:33.280 { 00:19:33.280 "name": "BaseBdev3", 00:19:33.280 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:33.280 "is_configured": true, 00:19:33.280 "data_offset": 2048, 00:19:33.280 "data_size": 63488 00:19:33.280 }, 00:19:33.280 { 00:19:33.280 "name": "BaseBdev4", 00:19:33.280 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:33.280 "is_configured": true, 00:19:33.280 "data_offset": 2048, 00:19:33.280 "data_size": 63488 00:19:33.280 } 00:19:33.280 ] 00:19:33.280 }' 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.280 16:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.657 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.657 "name": "raid_bdev1", 00:19:34.657 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:34.657 "strip_size_kb": 64, 00:19:34.657 "state": "online", 00:19:34.657 "raid_level": "raid5f", 00:19:34.657 "superblock": true, 00:19:34.657 "num_base_bdevs": 4, 00:19:34.657 "num_base_bdevs_discovered": 4, 00:19:34.657 "num_base_bdevs_operational": 4, 00:19:34.657 "process": { 00:19:34.657 "type": "rebuild", 00:19:34.657 "target": "spare", 00:19:34.657 "progress": { 00:19:34.657 "blocks": 176640, 00:19:34.657 "percent": 92 00:19:34.657 } 00:19:34.657 }, 00:19:34.657 "base_bdevs_list": [ 00:19:34.657 { 00:19:34.657 "name": "spare", 00:19:34.657 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:34.657 "is_configured": true, 00:19:34.657 "data_offset": 2048, 00:19:34.657 "data_size": 63488 00:19:34.657 }, 00:19:34.657 { 00:19:34.657 "name": "BaseBdev2", 
00:19:34.657 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:34.657 "is_configured": true, 00:19:34.657 "data_offset": 2048, 00:19:34.657 "data_size": 63488 00:19:34.657 }, 00:19:34.657 { 00:19:34.657 "name": "BaseBdev3", 00:19:34.657 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:34.657 "is_configured": true, 00:19:34.657 "data_offset": 2048, 00:19:34.657 "data_size": 63488 00:19:34.657 }, 00:19:34.657 { 00:19:34.657 "name": "BaseBdev4", 00:19:34.657 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:34.657 "is_configured": true, 00:19:34.657 "data_offset": 2048, 00:19:34.658 "data_size": 63488 00:19:34.658 } 00:19:34.658 ] 00:19:34.658 }' 00:19:34.658 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.658 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.658 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.658 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.658 16:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:35.228 [2024-10-08 16:27:28.390382] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:35.228 [2024-10-08 16:27:28.390519] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:35.228 [2024-10-08 16:27:28.390721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.486 16:27:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.486 "name": "raid_bdev1", 00:19:35.486 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:35.486 "strip_size_kb": 64, 00:19:35.486 "state": "online", 00:19:35.486 "raid_level": "raid5f", 00:19:35.486 "superblock": true, 00:19:35.486 "num_base_bdevs": 4, 00:19:35.486 "num_base_bdevs_discovered": 4, 00:19:35.486 "num_base_bdevs_operational": 4, 00:19:35.486 "base_bdevs_list": [ 00:19:35.486 { 00:19:35.486 "name": "spare", 00:19:35.486 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:35.486 "is_configured": true, 00:19:35.486 "data_offset": 2048, 00:19:35.486 "data_size": 63488 00:19:35.486 }, 00:19:35.486 { 00:19:35.486 "name": "BaseBdev2", 00:19:35.486 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:35.486 "is_configured": true, 00:19:35.486 "data_offset": 2048, 00:19:35.486 "data_size": 63488 00:19:35.486 }, 00:19:35.486 { 00:19:35.486 "name": "BaseBdev3", 00:19:35.486 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:35.486 "is_configured": true, 00:19:35.486 "data_offset": 2048, 00:19:35.486 
"data_size": 63488 00:19:35.486 }, 00:19:35.486 { 00:19:35.486 "name": "BaseBdev4", 00:19:35.486 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:35.486 "is_configured": true, 00:19:35.486 "data_offset": 2048, 00:19:35.486 "data_size": 63488 00:19:35.486 } 00:19:35.486 ] 00:19:35.486 }' 00:19:35.486 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.744 16:27:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.744 "name": "raid_bdev1", 00:19:35.744 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:35.744 "strip_size_kb": 64, 00:19:35.744 "state": "online", 00:19:35.744 "raid_level": "raid5f", 00:19:35.744 "superblock": true, 00:19:35.744 "num_base_bdevs": 4, 00:19:35.744 "num_base_bdevs_discovered": 4, 00:19:35.744 "num_base_bdevs_operational": 4, 00:19:35.744 "base_bdevs_list": [ 00:19:35.744 { 00:19:35.744 "name": "spare", 00:19:35.744 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:35.744 "is_configured": true, 00:19:35.744 "data_offset": 2048, 00:19:35.744 "data_size": 63488 00:19:35.744 }, 00:19:35.744 { 00:19:35.744 "name": "BaseBdev2", 00:19:35.744 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:35.744 "is_configured": true, 00:19:35.744 "data_offset": 2048, 00:19:35.744 "data_size": 63488 00:19:35.744 }, 00:19:35.744 { 00:19:35.744 "name": "BaseBdev3", 00:19:35.744 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:35.744 "is_configured": true, 00:19:35.744 "data_offset": 2048, 00:19:35.744 "data_size": 63488 00:19:35.744 }, 00:19:35.744 { 00:19:35.744 "name": "BaseBdev4", 00:19:35.744 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:35.744 "is_configured": true, 00:19:35.744 "data_offset": 2048, 00:19:35.744 "data_size": 63488 00:19:35.744 } 00:19:35.744 ] 00:19:35.744 }' 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.744 16:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.744 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.002 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.002 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.002 "name": "raid_bdev1", 00:19:36.002 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:36.002 "strip_size_kb": 64, 00:19:36.002 "state": "online", 00:19:36.002 "raid_level": "raid5f", 00:19:36.002 "superblock": true, 00:19:36.002 "num_base_bdevs": 4, 00:19:36.002 "num_base_bdevs_discovered": 4, 00:19:36.002 
"num_base_bdevs_operational": 4, 00:19:36.002 "base_bdevs_list": [ 00:19:36.002 { 00:19:36.002 "name": "spare", 00:19:36.002 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:36.002 "is_configured": true, 00:19:36.002 "data_offset": 2048, 00:19:36.002 "data_size": 63488 00:19:36.002 }, 00:19:36.002 { 00:19:36.002 "name": "BaseBdev2", 00:19:36.002 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:36.002 "is_configured": true, 00:19:36.002 "data_offset": 2048, 00:19:36.002 "data_size": 63488 00:19:36.002 }, 00:19:36.002 { 00:19:36.002 "name": "BaseBdev3", 00:19:36.002 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:36.002 "is_configured": true, 00:19:36.002 "data_offset": 2048, 00:19:36.002 "data_size": 63488 00:19:36.002 }, 00:19:36.002 { 00:19:36.002 "name": "BaseBdev4", 00:19:36.002 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:36.002 "is_configured": true, 00:19:36.002 "data_offset": 2048, 00:19:36.002 "data_size": 63488 00:19:36.002 } 00:19:36.002 ] 00:19:36.002 }' 00:19:36.002 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.002 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.260 [2024-10-08 16:27:29.556456] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.260 [2024-10-08 16:27:29.556504] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.260 [2024-10-08 16:27:29.556617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.260 [2024-10-08 16:27:29.556742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:36.260 [2024-10-08 16:27:29.556761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.260 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.517 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:36.518 16:27:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:36.518 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:36.775 /dev/nbd0 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.775 1+0 records in 00:19:36.775 1+0 records out 00:19:36.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030627 s, 13.4 MB/s 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:36.775 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:36.776 16:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:37.047 /dev/nbd1 00:19:37.047 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:37.048 1+0 records in 00:19:37.048 1+0 records out 00:19:37.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029901 s, 13.7 MB/s 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:37.048 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.316 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.585 16:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.843 [2024-10-08 16:27:31.105504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:37.843 [2024-10-08 16:27:31.105590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.843 [2024-10-08 16:27:31.105625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:37.843 [2024-10-08 16:27:31.105641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.843 [2024-10-08 16:27:31.108480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.843 [2024-10-08 16:27:31.108534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:37.843 [2024-10-08 16:27:31.108649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:37.843 [2024-10-08 16:27:31.108725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.843 [2024-10-08 16:27:31.108907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.843 [2024-10-08 16:27:31.109050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:37.843 [2024-10-08 16:27:31.109159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:37.843 spare 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.843 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.102 [2024-10-08 16:27:31.209307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:38.102 [2024-10-08 16:27:31.209387] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.102 [2024-10-08 16:27:31.209842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:38.102 [2024-10-08 16:27:31.216196] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:38.102 [2024-10-08 16:27:31.216227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:38.102 [2024-10-08 16:27:31.216492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.102 16:27:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.102 "name": "raid_bdev1", 00:19:38.102 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:38.102 "strip_size_kb": 64, 00:19:38.102 "state": "online", 00:19:38.102 "raid_level": "raid5f", 00:19:38.102 "superblock": true, 00:19:38.102 "num_base_bdevs": 4, 00:19:38.102 "num_base_bdevs_discovered": 4, 00:19:38.102 "num_base_bdevs_operational": 4, 00:19:38.102 "base_bdevs_list": [ 00:19:38.102 { 00:19:38.102 "name": "spare", 00:19:38.102 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:38.102 "is_configured": true, 00:19:38.102 "data_offset": 2048, 00:19:38.102 "data_size": 63488 00:19:38.102 }, 00:19:38.102 { 00:19:38.102 "name": "BaseBdev2", 00:19:38.102 "uuid": 
"73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:38.102 "is_configured": true, 00:19:38.102 "data_offset": 2048, 00:19:38.102 "data_size": 63488 00:19:38.102 }, 00:19:38.102 { 00:19:38.102 "name": "BaseBdev3", 00:19:38.102 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:38.102 "is_configured": true, 00:19:38.102 "data_offset": 2048, 00:19:38.102 "data_size": 63488 00:19:38.102 }, 00:19:38.102 { 00:19:38.102 "name": "BaseBdev4", 00:19:38.102 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:38.102 "is_configured": true, 00:19:38.102 "data_offset": 2048, 00:19:38.102 "data_size": 63488 00:19:38.102 } 00:19:38.102 ] 00:19:38.102 }' 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.102 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.670 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.670 16:27:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.671 "name": "raid_bdev1", 00:19:38.671 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:38.671 "strip_size_kb": 64, 00:19:38.671 "state": "online", 00:19:38.671 "raid_level": "raid5f", 00:19:38.671 "superblock": true, 00:19:38.671 "num_base_bdevs": 4, 00:19:38.671 "num_base_bdevs_discovered": 4, 00:19:38.671 "num_base_bdevs_operational": 4, 00:19:38.671 "base_bdevs_list": [ 00:19:38.671 { 00:19:38.671 "name": "spare", 00:19:38.671 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:38.671 "is_configured": true, 00:19:38.671 "data_offset": 2048, 00:19:38.671 "data_size": 63488 00:19:38.671 }, 00:19:38.671 { 00:19:38.671 "name": "BaseBdev2", 00:19:38.671 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:38.671 "is_configured": true, 00:19:38.671 "data_offset": 2048, 00:19:38.671 "data_size": 63488 00:19:38.671 }, 00:19:38.671 { 00:19:38.671 "name": "BaseBdev3", 00:19:38.671 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:38.671 "is_configured": true, 00:19:38.671 "data_offset": 2048, 00:19:38.671 "data_size": 63488 00:19:38.671 }, 00:19:38.671 { 00:19:38.671 "name": "BaseBdev4", 00:19:38.671 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:38.671 "is_configured": true, 00:19:38.671 "data_offset": 2048, 00:19:38.671 "data_size": 63488 00:19:38.671 } 00:19:38.671 ] 00:19:38.671 }' 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.671 
16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.671 [2024-10-08 16:27:31.935976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.671 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.929 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.929 "name": "raid_bdev1", 00:19:38.929 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:38.929 "strip_size_kb": 64, 00:19:38.929 "state": "online", 00:19:38.929 "raid_level": "raid5f", 00:19:38.929 "superblock": true, 00:19:38.929 "num_base_bdevs": 4, 00:19:38.929 "num_base_bdevs_discovered": 3, 00:19:38.929 "num_base_bdevs_operational": 3, 00:19:38.929 "base_bdevs_list": [ 00:19:38.929 { 00:19:38.929 "name": null, 00:19:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.929 "is_configured": false, 00:19:38.929 "data_offset": 0, 00:19:38.929 "data_size": 63488 00:19:38.929 }, 00:19:38.929 { 00:19:38.929 "name": "BaseBdev2", 00:19:38.929 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:38.929 "is_configured": true, 00:19:38.929 "data_offset": 2048, 00:19:38.929 "data_size": 63488 00:19:38.929 }, 00:19:38.929 { 00:19:38.929 "name": "BaseBdev3", 00:19:38.929 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:38.929 "is_configured": true, 00:19:38.929 "data_offset": 2048, 00:19:38.929 "data_size": 63488 00:19:38.929 }, 00:19:38.929 { 00:19:38.929 "name": "BaseBdev4", 
00:19:38.929 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:38.929 "is_configured": true, 00:19:38.929 "data_offset": 2048, 00:19:38.929 "data_size": 63488 00:19:38.929 } 00:19:38.929 ] 00:19:38.929 }' 00:19:38.929 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.929 16:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.187 16:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.187 16:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.187 16:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.187 [2024-10-08 16:27:32.468153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.187 [2024-10-08 16:27:32.468414] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:39.187 [2024-10-08 16:27:32.468451] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:39.187 [2024-10-08 16:27:32.468502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.187 [2024-10-08 16:27:32.481613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:39.187 16:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.187 16:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:39.187 [2024-10-08 16:27:32.490721] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.564 "name": "raid_bdev1", 00:19:40.564 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:40.564 "strip_size_kb": 64, 00:19:40.564 "state": "online", 00:19:40.564 
"raid_level": "raid5f", 00:19:40.564 "superblock": true, 00:19:40.564 "num_base_bdevs": 4, 00:19:40.564 "num_base_bdevs_discovered": 4, 00:19:40.564 "num_base_bdevs_operational": 4, 00:19:40.564 "process": { 00:19:40.564 "type": "rebuild", 00:19:40.564 "target": "spare", 00:19:40.564 "progress": { 00:19:40.564 "blocks": 17280, 00:19:40.564 "percent": 9 00:19:40.564 } 00:19:40.564 }, 00:19:40.564 "base_bdevs_list": [ 00:19:40.564 { 00:19:40.564 "name": "spare", 00:19:40.564 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:40.564 "is_configured": true, 00:19:40.564 "data_offset": 2048, 00:19:40.564 "data_size": 63488 00:19:40.564 }, 00:19:40.564 { 00:19:40.564 "name": "BaseBdev2", 00:19:40.564 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:40.564 "is_configured": true, 00:19:40.564 "data_offset": 2048, 00:19:40.564 "data_size": 63488 00:19:40.564 }, 00:19:40.564 { 00:19:40.564 "name": "BaseBdev3", 00:19:40.564 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:40.564 "is_configured": true, 00:19:40.564 "data_offset": 2048, 00:19:40.564 "data_size": 63488 00:19:40.564 }, 00:19:40.564 { 00:19:40.564 "name": "BaseBdev4", 00:19:40.564 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:40.564 "is_configured": true, 00:19:40.564 "data_offset": 2048, 00:19:40.564 "data_size": 63488 00:19:40.564 } 00:19:40.564 ] 00:19:40.564 }' 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.564 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.565 [2024-10-08 16:27:33.652233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.565 [2024-10-08 16:27:33.703638] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:40.565 [2024-10-08 16:27:33.703800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.565 [2024-10-08 16:27:33.703828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.565 [2024-10-08 16:27:33.703849] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.565 "name": "raid_bdev1", 00:19:40.565 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:40.565 "strip_size_kb": 64, 00:19:40.565 "state": "online", 00:19:40.565 "raid_level": "raid5f", 00:19:40.565 "superblock": true, 00:19:40.565 "num_base_bdevs": 4, 00:19:40.565 "num_base_bdevs_discovered": 3, 00:19:40.565 "num_base_bdevs_operational": 3, 00:19:40.565 "base_bdevs_list": [ 00:19:40.565 { 00:19:40.565 "name": null, 00:19:40.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.565 "is_configured": false, 00:19:40.565 "data_offset": 0, 00:19:40.565 "data_size": 63488 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev2", 00:19:40.565 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:40.565 "is_configured": true, 00:19:40.565 "data_offset": 2048, 00:19:40.565 "data_size": 63488 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev3", 00:19:40.565 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:40.565 "is_configured": true, 00:19:40.565 "data_offset": 2048, 00:19:40.565 "data_size": 63488 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev4", 00:19:40.565 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:40.565 "is_configured": true, 00:19:40.565 "data_offset": 2048, 00:19:40.565 "data_size": 63488 00:19:40.565 } 00:19:40.565 ] 00:19:40.565 }' 
00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.565 16:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.137 16:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:41.137 16:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.137 16:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.137 [2024-10-08 16:27:34.273071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:41.137 [2024-10-08 16:27:34.273189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.137 [2024-10-08 16:27:34.273229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:41.137 [2024-10-08 16:27:34.273249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.137 [2024-10-08 16:27:34.273875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.137 [2024-10-08 16:27:34.273934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:41.137 [2024-10-08 16:27:34.274058] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:41.137 [2024-10-08 16:27:34.274083] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.137 [2024-10-08 16:27:34.274098] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:41.137 [2024-10-08 16:27:34.274139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.137 [2024-10-08 16:27:34.287094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:41.137 spare 00:19:41.137 16:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.137 16:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:41.137 [2024-10-08 16:27:34.295920] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.071 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.071 "name": "raid_bdev1", 00:19:42.071 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:42.071 "strip_size_kb": 64, 00:19:42.071 "state": 
"online", 00:19:42.071 "raid_level": "raid5f", 00:19:42.071 "superblock": true, 00:19:42.071 "num_base_bdevs": 4, 00:19:42.071 "num_base_bdevs_discovered": 4, 00:19:42.071 "num_base_bdevs_operational": 4, 00:19:42.071 "process": { 00:19:42.071 "type": "rebuild", 00:19:42.071 "target": "spare", 00:19:42.071 "progress": { 00:19:42.071 "blocks": 17280, 00:19:42.071 "percent": 9 00:19:42.071 } 00:19:42.071 }, 00:19:42.071 "base_bdevs_list": [ 00:19:42.071 { 00:19:42.071 "name": "spare", 00:19:42.072 "uuid": "ac173cb3-017b-5ae4-9548-10569a86ea31", 00:19:42.072 "is_configured": true, 00:19:42.072 "data_offset": 2048, 00:19:42.072 "data_size": 63488 00:19:42.072 }, 00:19:42.072 { 00:19:42.072 "name": "BaseBdev2", 00:19:42.072 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:42.072 "is_configured": true, 00:19:42.072 "data_offset": 2048, 00:19:42.072 "data_size": 63488 00:19:42.072 }, 00:19:42.072 { 00:19:42.072 "name": "BaseBdev3", 00:19:42.072 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:42.072 "is_configured": true, 00:19:42.072 "data_offset": 2048, 00:19:42.072 "data_size": 63488 00:19:42.072 }, 00:19:42.072 { 00:19:42.072 "name": "BaseBdev4", 00:19:42.072 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:42.072 "is_configured": true, 00:19:42.072 "data_offset": 2048, 00:19:42.072 "data_size": 63488 00:19:42.072 } 00:19:42.072 ] 00:19:42.072 }' 00:19:42.072 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:42.330 16:27:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.330 [2024-10-08 16:27:35.461677] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:42.330 [2024-10-08 16:27:35.508705] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:42.330 [2024-10-08 16:27:35.508809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.330 [2024-10-08 16:27:35.508840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:42.330 [2024-10-08 16:27:35.508852] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.330 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.331 16:27:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.331 "name": "raid_bdev1", 00:19:42.331 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:42.331 "strip_size_kb": 64, 00:19:42.331 "state": "online", 00:19:42.331 "raid_level": "raid5f", 00:19:42.331 "superblock": true, 00:19:42.331 "num_base_bdevs": 4, 00:19:42.331 "num_base_bdevs_discovered": 3, 00:19:42.331 "num_base_bdevs_operational": 3, 00:19:42.331 "base_bdevs_list": [ 00:19:42.331 { 00:19:42.331 "name": null, 00:19:42.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.331 "is_configured": false, 00:19:42.331 "data_offset": 0, 00:19:42.331 "data_size": 63488 00:19:42.331 }, 00:19:42.331 { 00:19:42.331 "name": "BaseBdev2", 00:19:42.331 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:42.331 "is_configured": true, 00:19:42.331 "data_offset": 2048, 00:19:42.331 "data_size": 63488 00:19:42.331 }, 00:19:42.331 { 00:19:42.331 "name": "BaseBdev3", 00:19:42.331 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:42.331 "is_configured": true, 00:19:42.331 "data_offset": 2048, 00:19:42.331 "data_size": 63488 00:19:42.331 }, 00:19:42.331 { 00:19:42.331 "name": "BaseBdev4", 00:19:42.331 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:42.331 "is_configured": true, 00:19:42.331 "data_offset": 2048, 00:19:42.331 
"data_size": 63488 00:19:42.331 } 00:19:42.331 ] 00:19:42.331 }' 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.331 16:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.897 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.897 "name": "raid_bdev1", 00:19:42.897 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:42.897 "strip_size_kb": 64, 00:19:42.897 "state": "online", 00:19:42.897 "raid_level": "raid5f", 00:19:42.897 "superblock": true, 00:19:42.897 "num_base_bdevs": 4, 00:19:42.897 "num_base_bdevs_discovered": 3, 00:19:42.897 "num_base_bdevs_operational": 3, 00:19:42.897 "base_bdevs_list": [ 00:19:42.897 { 00:19:42.897 "name": null, 00:19:42.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.897 
"is_configured": false, 00:19:42.897 "data_offset": 0, 00:19:42.897 "data_size": 63488 00:19:42.897 }, 00:19:42.897 { 00:19:42.897 "name": "BaseBdev2", 00:19:42.897 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:42.897 "is_configured": true, 00:19:42.897 "data_offset": 2048, 00:19:42.897 "data_size": 63488 00:19:42.897 }, 00:19:42.897 { 00:19:42.897 "name": "BaseBdev3", 00:19:42.897 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:42.897 "is_configured": true, 00:19:42.897 "data_offset": 2048, 00:19:42.897 "data_size": 63488 00:19:42.897 }, 00:19:42.897 { 00:19:42.898 "name": "BaseBdev4", 00:19:42.898 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:42.898 "is_configured": true, 00:19:42.898 "data_offset": 2048, 00:19:42.898 "data_size": 63488 00:19:42.898 } 00:19:42.898 ] 00:19:42.898 }' 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:42.898 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.898 16:27:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.898 [2024-10-08 16:27:36.214929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:42.898 [2024-10-08 16:27:36.214993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.898 [2024-10-08 16:27:36.215023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:42.898 [2024-10-08 16:27:36.215037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.898 [2024-10-08 16:27:36.215665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.898 [2024-10-08 16:27:36.215708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:42.898 [2024-10-08 16:27:36.215819] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:42.898 [2024-10-08 16:27:36.215840] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:42.898 [2024-10-08 16:27:36.215855] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:42.898 [2024-10-08 16:27:36.215868] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:43.156 BaseBdev1 00:19:43.156 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.156 16:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:44.090 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:44.090 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.090 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:44.090 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.091 "name": "raid_bdev1", 00:19:44.091 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:44.091 "strip_size_kb": 64, 00:19:44.091 "state": "online", 00:19:44.091 "raid_level": "raid5f", 00:19:44.091 "superblock": true, 00:19:44.091 "num_base_bdevs": 4, 00:19:44.091 "num_base_bdevs_discovered": 3, 00:19:44.091 "num_base_bdevs_operational": 3, 00:19:44.091 "base_bdevs_list": [ 00:19:44.091 { 00:19:44.091 "name": null, 00:19:44.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.091 "is_configured": false, 00:19:44.091 
"data_offset": 0, 00:19:44.091 "data_size": 63488 00:19:44.091 }, 00:19:44.091 { 00:19:44.091 "name": "BaseBdev2", 00:19:44.091 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:44.091 "is_configured": true, 00:19:44.091 "data_offset": 2048, 00:19:44.091 "data_size": 63488 00:19:44.091 }, 00:19:44.091 { 00:19:44.091 "name": "BaseBdev3", 00:19:44.091 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:44.091 "is_configured": true, 00:19:44.091 "data_offset": 2048, 00:19:44.091 "data_size": 63488 00:19:44.091 }, 00:19:44.091 { 00:19:44.091 "name": "BaseBdev4", 00:19:44.091 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:44.091 "is_configured": true, 00:19:44.091 "data_offset": 2048, 00:19:44.091 "data_size": 63488 00:19:44.091 } 00:19:44.091 ] 00:19:44.091 }' 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.091 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.657 "name": "raid_bdev1", 00:19:44.657 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:44.657 "strip_size_kb": 64, 00:19:44.657 "state": "online", 00:19:44.657 "raid_level": "raid5f", 00:19:44.657 "superblock": true, 00:19:44.657 "num_base_bdevs": 4, 00:19:44.657 "num_base_bdevs_discovered": 3, 00:19:44.657 "num_base_bdevs_operational": 3, 00:19:44.657 "base_bdevs_list": [ 00:19:44.657 { 00:19:44.657 "name": null, 00:19:44.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.657 "is_configured": false, 00:19:44.657 "data_offset": 0, 00:19:44.657 "data_size": 63488 00:19:44.657 }, 00:19:44.657 { 00:19:44.657 "name": "BaseBdev2", 00:19:44.657 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:44.657 "is_configured": true, 00:19:44.657 "data_offset": 2048, 00:19:44.657 "data_size": 63488 00:19:44.657 }, 00:19:44.657 { 00:19:44.657 "name": "BaseBdev3", 00:19:44.657 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:44.657 "is_configured": true, 00:19:44.657 "data_offset": 2048, 00:19:44.657 "data_size": 63488 00:19:44.657 }, 00:19:44.657 { 00:19:44.657 "name": "BaseBdev4", 00:19:44.657 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:44.657 "is_configured": true, 00:19:44.657 "data_offset": 2048, 00:19:44.657 "data_size": 63488 00:19:44.657 } 00:19:44.657 ] 00:19:44.657 }' 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.657 
16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.657 [2024-10-08 16:27:37.899443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.657 [2024-10-08 16:27:37.899676] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:44.657 [2024-10-08 16:27:37.899713] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:44.657 request: 00:19:44.657 { 00:19:44.657 "base_bdev": "BaseBdev1", 00:19:44.657 "raid_bdev": "raid_bdev1", 00:19:44.657 "method": "bdev_raid_add_base_bdev", 00:19:44.657 "req_id": 1 00:19:44.657 } 00:19:44.657 Got JSON-RPC error response 00:19:44.657 response: 00:19:44.657 { 00:19:44.657 "code": -22, 00:19:44.657 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:44.657 } 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.657 16:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.597 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.855 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.855 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.855 "name": "raid_bdev1", 00:19:45.855 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:45.855 "strip_size_kb": 64, 00:19:45.855 "state": "online", 00:19:45.855 "raid_level": "raid5f", 00:19:45.855 "superblock": true, 00:19:45.855 "num_base_bdevs": 4, 00:19:45.855 "num_base_bdevs_discovered": 3, 00:19:45.855 "num_base_bdevs_operational": 3, 00:19:45.855 "base_bdevs_list": [ 00:19:45.855 { 00:19:45.855 "name": null, 00:19:45.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.855 "is_configured": false, 00:19:45.855 "data_offset": 0, 00:19:45.855 "data_size": 63488 00:19:45.855 }, 00:19:45.855 { 00:19:45.855 "name": "BaseBdev2", 00:19:45.855 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:45.855 "is_configured": true, 00:19:45.855 "data_offset": 2048, 00:19:45.855 "data_size": 63488 00:19:45.855 }, 00:19:45.855 { 00:19:45.855 "name": "BaseBdev3", 00:19:45.855 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:45.855 "is_configured": true, 00:19:45.855 "data_offset": 2048, 00:19:45.855 "data_size": 63488 00:19:45.855 }, 00:19:45.855 { 00:19:45.855 "name": "BaseBdev4", 00:19:45.855 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:45.855 "is_configured": true, 00:19:45.855 "data_offset": 2048, 00:19:45.855 "data_size": 63488 00:19:45.855 } 00:19:45.855 ] 00:19:45.855 }' 00:19:45.855 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.855 16:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.113 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.114 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.114 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.372 "name": "raid_bdev1", 00:19:46.372 "uuid": "01572688-6800-4f7c-b850-c868b66fb600", 00:19:46.372 "strip_size_kb": 64, 00:19:46.372 "state": "online", 00:19:46.372 "raid_level": "raid5f", 00:19:46.372 "superblock": true, 00:19:46.372 "num_base_bdevs": 4, 00:19:46.372 "num_base_bdevs_discovered": 3, 00:19:46.372 "num_base_bdevs_operational": 3, 00:19:46.372 "base_bdevs_list": [ 00:19:46.372 { 00:19:46.372 "name": null, 00:19:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.372 "is_configured": false, 00:19:46.372 "data_offset": 0, 00:19:46.372 "data_size": 63488 00:19:46.372 }, 00:19:46.372 { 00:19:46.372 "name": "BaseBdev2", 00:19:46.372 "uuid": "73325858-ca9b-5dc1-b7a8-c8d948aecc60", 00:19:46.372 "is_configured": true, 
00:19:46.372 "data_offset": 2048, 00:19:46.372 "data_size": 63488 00:19:46.372 }, 00:19:46.372 { 00:19:46.372 "name": "BaseBdev3", 00:19:46.372 "uuid": "de0cd34e-a608-53fb-94a5-1b311a328afb", 00:19:46.372 "is_configured": true, 00:19:46.372 "data_offset": 2048, 00:19:46.372 "data_size": 63488 00:19:46.372 }, 00:19:46.372 { 00:19:46.372 "name": "BaseBdev4", 00:19:46.372 "uuid": "8dd0b8af-5cb8-57da-b5de-6f3641a59126", 00:19:46.372 "is_configured": true, 00:19:46.372 "data_offset": 2048, 00:19:46.372 "data_size": 63488 00:19:46.372 } 00:19:46.372 ] 00:19:46.372 }' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85899 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85899 ']' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85899 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85899 00:19:46.372 killing process with pid 85899 00:19:46.372 Received shutdown signal, test time was about 60.000000 seconds 00:19:46.372 00:19:46.372 Latency(us) 00:19:46.372 [2024-10-08T16:27:39.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.372 [2024-10-08T16:27:39.694Z] 
=================================================================================================================== 00:19:46.372 [2024-10-08T16:27:39.694Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85899' 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85899 00:19:46.372 [2024-10-08 16:27:39.622240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:46.372 16:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85899 00:19:46.372 [2024-10-08 16:27:39.622391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.372 [2024-10-08 16:27:39.622496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:46.372 [2024-10-08 16:27:39.622534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:46.938 [2024-10-08 16:27:40.067949] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:48.313 16:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:48.313 00:19:48.313 real 0m28.750s 00:19:48.313 user 0m37.335s 00:19:48.313 sys 0m2.904s 00:19:48.313 16:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:48.313 ************************************ 00:19:48.313 END TEST raid5f_rebuild_test_sb 00:19:48.313 16:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 ************************************ 00:19:48.313 16:27:41 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:48.313 16:27:41 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:48.313 16:27:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:48.313 16:27:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:48.313 16:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 ************************************ 00:19:48.313 START TEST raid_state_function_test_sb_4k 00:19:48.313 ************************************ 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:48.313 16:27:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86719 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86719' 00:19:48.313 Process raid pid: 86719 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86719 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86719 ']' 00:19:48.313 16:27:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.313 16:27:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 [2024-10-08 16:27:41.491092] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:48.313 [2024-10-08 16:27:41.491400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.572 [2024-10-08 16:27:41.673374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.829 [2024-10-08 16:27:41.914501] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.829 [2024-10-08 16:27:42.122359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.829 [2024-10-08 16:27:42.122431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.395 [2024-10-08 16:27:42.432601] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:49.395 [2024-10-08 16:27:42.432665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:49.395 [2024-10-08 16:27:42.432682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:49.395 [2024-10-08 16:27:42.432701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.395 
16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.395 "name": "Existed_Raid", 00:19:49.395 "uuid": "4d55b738-2b75-451b-9ab9-0ddd3eca9423", 00:19:49.395 "strip_size_kb": 0, 00:19:49.395 "state": "configuring", 00:19:49.395 "raid_level": "raid1", 00:19:49.395 "superblock": true, 00:19:49.395 "num_base_bdevs": 2, 00:19:49.395 "num_base_bdevs_discovered": 0, 00:19:49.395 "num_base_bdevs_operational": 2, 00:19:49.395 "base_bdevs_list": [ 00:19:49.395 { 00:19:49.395 "name": "BaseBdev1", 00:19:49.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.395 "is_configured": false, 00:19:49.395 "data_offset": 0, 00:19:49.395 "data_size": 0 00:19:49.395 }, 00:19:49.395 { 00:19:49.395 "name": "BaseBdev2", 00:19:49.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.395 "is_configured": false, 00:19:49.395 "data_offset": 0, 00:19:49.395 "data_size": 0 00:19:49.395 } 00:19:49.395 ] 00:19:49.395 }' 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.395 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.653 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:19:49.653 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.653 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.911 [2024-10-08 16:27:42.976607] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:49.911 [2024-10-08 16:27:42.976654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.911 [2024-10-08 16:27:42.984646] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:49.911 [2024-10-08 16:27:42.984699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:49.911 [2024-10-08 16:27:42.984714] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:49.911 [2024-10-08 16:27:42.984732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:49.911 16:27:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.911 16:27:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.911 [2024-10-08 16:27:43.042391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.911 BaseBdev1 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.911 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.911 [ 00:19:49.911 { 00:19:49.911 "name": "BaseBdev1", 00:19:49.911 "aliases": [ 00:19:49.911 
"ff90d9fa-ca89-4480-ba3b-92c304b7e21d" 00:19:49.911 ], 00:19:49.911 "product_name": "Malloc disk", 00:19:49.911 "block_size": 4096, 00:19:49.911 "num_blocks": 8192, 00:19:49.911 "uuid": "ff90d9fa-ca89-4480-ba3b-92c304b7e21d", 00:19:49.911 "assigned_rate_limits": { 00:19:49.911 "rw_ios_per_sec": 0, 00:19:49.911 "rw_mbytes_per_sec": 0, 00:19:49.911 "r_mbytes_per_sec": 0, 00:19:49.911 "w_mbytes_per_sec": 0 00:19:49.911 }, 00:19:49.911 "claimed": true, 00:19:49.911 "claim_type": "exclusive_write", 00:19:49.911 "zoned": false, 00:19:49.911 "supported_io_types": { 00:19:49.911 "read": true, 00:19:49.911 "write": true, 00:19:49.911 "unmap": true, 00:19:49.911 "flush": true, 00:19:49.911 "reset": true, 00:19:49.911 "nvme_admin": false, 00:19:49.911 "nvme_io": false, 00:19:49.911 "nvme_io_md": false, 00:19:49.911 "write_zeroes": true, 00:19:49.911 "zcopy": true, 00:19:49.911 "get_zone_info": false, 00:19:49.911 "zone_management": false, 00:19:49.911 "zone_append": false, 00:19:49.912 "compare": false, 00:19:49.912 "compare_and_write": false, 00:19:49.912 "abort": true, 00:19:49.912 "seek_hole": false, 00:19:49.912 "seek_data": false, 00:19:49.912 "copy": true, 00:19:49.912 "nvme_iov_md": false 00:19:49.912 }, 00:19:49.912 "memory_domains": [ 00:19:49.912 { 00:19:49.912 "dma_device_id": "system", 00:19:49.912 "dma_device_type": 1 00:19:49.912 }, 00:19:49.912 { 00:19:49.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.912 "dma_device_type": 2 00:19:49.912 } 00:19:49.912 ], 00:19:49.912 "driver_specific": {} 00:19:49.912 } 00:19:49.912 ] 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.912 "name": "Existed_Raid", 00:19:49.912 "uuid": "ba7c0343-6d75-4a70-a922-08bafd5d2d1c", 00:19:49.912 "strip_size_kb": 0, 00:19:49.912 "state": "configuring", 00:19:49.912 "raid_level": "raid1", 00:19:49.912 "superblock": true, 00:19:49.912 "num_base_bdevs": 2, 00:19:49.912 
"num_base_bdevs_discovered": 1, 00:19:49.912 "num_base_bdevs_operational": 2, 00:19:49.912 "base_bdevs_list": [ 00:19:49.912 { 00:19:49.912 "name": "BaseBdev1", 00:19:49.912 "uuid": "ff90d9fa-ca89-4480-ba3b-92c304b7e21d", 00:19:49.912 "is_configured": true, 00:19:49.912 "data_offset": 256, 00:19:49.912 "data_size": 7936 00:19:49.912 }, 00:19:49.912 { 00:19:49.912 "name": "BaseBdev2", 00:19:49.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.912 "is_configured": false, 00:19:49.912 "data_offset": 0, 00:19:49.912 "data_size": 0 00:19:49.912 } 00:19:49.912 ] 00:19:49.912 }' 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.912 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.478 [2024-10-08 16:27:43.606594] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:50.478 [2024-10-08 16:27:43.606660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.478 [2024-10-08 16:27:43.614619] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:50.478 [2024-10-08 16:27:43.617127] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.478 [2024-10-08 16:27:43.617192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.478 "name": "Existed_Raid", 00:19:50.478 "uuid": "43294837-a7ea-4bb0-8aac-9f6c6a0237f9", 00:19:50.478 "strip_size_kb": 0, 00:19:50.478 "state": "configuring", 00:19:50.478 "raid_level": "raid1", 00:19:50.478 "superblock": true, 00:19:50.478 "num_base_bdevs": 2, 00:19:50.478 "num_base_bdevs_discovered": 1, 00:19:50.478 "num_base_bdevs_operational": 2, 00:19:50.478 "base_bdevs_list": [ 00:19:50.478 { 00:19:50.478 "name": "BaseBdev1", 00:19:50.478 "uuid": "ff90d9fa-ca89-4480-ba3b-92c304b7e21d", 00:19:50.478 "is_configured": true, 00:19:50.478 "data_offset": 256, 00:19:50.478 "data_size": 7936 00:19:50.478 }, 00:19:50.478 { 00:19:50.478 "name": "BaseBdev2", 00:19:50.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.478 "is_configured": false, 00:19:50.478 "data_offset": 0, 00:19:50.478 "data_size": 0 00:19:50.478 } 00:19:50.478 ] 00:19:50.478 }' 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.478 16:27:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.046 16:27:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.046 [2024-10-08 16:27:44.178297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.046 [2024-10-08 16:27:44.178662] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:51.046 [2024-10-08 16:27:44.178682] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:51.046 [2024-10-08 16:27:44.179029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:51.046 BaseBdev2 00:19:51.046 [2024-10-08 16:27:44.179235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:51.046 [2024-10-08 16:27:44.179257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:51.046 [2024-10-08 16:27:44.179431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:51.046 16:27:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.046 [ 00:19:51.046 { 00:19:51.046 "name": "BaseBdev2", 00:19:51.046 "aliases": [ 00:19:51.046 "a143d4e5-38fd-47f6-9bec-6eb4398b6237" 00:19:51.046 ], 00:19:51.046 "product_name": "Malloc disk", 00:19:51.046 "block_size": 4096, 00:19:51.046 "num_blocks": 8192, 00:19:51.046 "uuid": "a143d4e5-38fd-47f6-9bec-6eb4398b6237", 00:19:51.046 "assigned_rate_limits": { 00:19:51.046 "rw_ios_per_sec": 0, 00:19:51.046 "rw_mbytes_per_sec": 0, 00:19:51.046 "r_mbytes_per_sec": 0, 00:19:51.046 "w_mbytes_per_sec": 0 00:19:51.046 }, 00:19:51.046 "claimed": true, 00:19:51.046 "claim_type": "exclusive_write", 00:19:51.046 "zoned": false, 00:19:51.046 "supported_io_types": { 00:19:51.046 "read": true, 00:19:51.046 "write": true, 00:19:51.046 "unmap": true, 00:19:51.046 "flush": true, 00:19:51.046 "reset": true, 00:19:51.046 "nvme_admin": false, 00:19:51.046 "nvme_io": false, 00:19:51.046 "nvme_io_md": false, 00:19:51.046 "write_zeroes": true, 00:19:51.046 "zcopy": true, 00:19:51.046 "get_zone_info": false, 00:19:51.046 "zone_management": false, 00:19:51.046 "zone_append": false, 00:19:51.046 "compare": false, 00:19:51.046 "compare_and_write": false, 00:19:51.046 "abort": true, 00:19:51.046 "seek_hole": false, 00:19:51.046 "seek_data": false, 00:19:51.046 "copy": true, 00:19:51.046 "nvme_iov_md": false 
00:19:51.046 }, 00:19:51.046 "memory_domains": [ 00:19:51.046 { 00:19:51.046 "dma_device_id": "system", 00:19:51.046 "dma_device_type": 1 00:19:51.046 }, 00:19:51.046 { 00:19:51.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.046 "dma_device_type": 2 00:19:51.046 } 00:19:51.046 ], 00:19:51.046 "driver_specific": {} 00:19:51.046 } 00:19:51.046 ] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.046 "name": "Existed_Raid", 00:19:51.046 "uuid": "43294837-a7ea-4bb0-8aac-9f6c6a0237f9", 00:19:51.046 "strip_size_kb": 0, 00:19:51.046 "state": "online", 00:19:51.046 "raid_level": "raid1", 00:19:51.046 "superblock": true, 00:19:51.046 "num_base_bdevs": 2, 00:19:51.046 "num_base_bdevs_discovered": 2, 00:19:51.046 "num_base_bdevs_operational": 2, 00:19:51.046 "base_bdevs_list": [ 00:19:51.046 { 00:19:51.046 "name": "BaseBdev1", 00:19:51.046 "uuid": "ff90d9fa-ca89-4480-ba3b-92c304b7e21d", 00:19:51.046 "is_configured": true, 00:19:51.046 "data_offset": 256, 00:19:51.046 "data_size": 7936 00:19:51.046 }, 00:19:51.046 { 00:19:51.046 "name": "BaseBdev2", 00:19:51.046 "uuid": "a143d4e5-38fd-47f6-9bec-6eb4398b6237", 00:19:51.046 "is_configured": true, 00:19:51.046 "data_offset": 256, 00:19:51.046 "data_size": 7936 00:19:51.046 } 00:19:51.046 ] 00:19:51.046 }' 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.046 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:51.614 16:27:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:51.614 [2024-10-08 16:27:44.770966] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.614 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.614 "name": "Existed_Raid", 00:19:51.614 "aliases": [ 00:19:51.614 "43294837-a7ea-4bb0-8aac-9f6c6a0237f9" 00:19:51.614 ], 00:19:51.614 "product_name": "Raid Volume", 00:19:51.614 "block_size": 4096, 00:19:51.614 "num_blocks": 7936, 00:19:51.614 "uuid": "43294837-a7ea-4bb0-8aac-9f6c6a0237f9", 00:19:51.614 "assigned_rate_limits": { 00:19:51.614 "rw_ios_per_sec": 0, 00:19:51.615 "rw_mbytes_per_sec": 0, 00:19:51.615 "r_mbytes_per_sec": 0, 00:19:51.615 "w_mbytes_per_sec": 0 00:19:51.615 }, 00:19:51.615 "claimed": false, 00:19:51.615 "zoned": false, 00:19:51.615 "supported_io_types": { 00:19:51.615 "read": true, 
00:19:51.615 "write": true, 00:19:51.615 "unmap": false, 00:19:51.615 "flush": false, 00:19:51.615 "reset": true, 00:19:51.615 "nvme_admin": false, 00:19:51.615 "nvme_io": false, 00:19:51.615 "nvme_io_md": false, 00:19:51.615 "write_zeroes": true, 00:19:51.615 "zcopy": false, 00:19:51.615 "get_zone_info": false, 00:19:51.615 "zone_management": false, 00:19:51.615 "zone_append": false, 00:19:51.615 "compare": false, 00:19:51.615 "compare_and_write": false, 00:19:51.615 "abort": false, 00:19:51.615 "seek_hole": false, 00:19:51.615 "seek_data": false, 00:19:51.615 "copy": false, 00:19:51.615 "nvme_iov_md": false 00:19:51.615 }, 00:19:51.615 "memory_domains": [ 00:19:51.615 { 00:19:51.615 "dma_device_id": "system", 00:19:51.615 "dma_device_type": 1 00:19:51.615 }, 00:19:51.615 { 00:19:51.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.615 "dma_device_type": 2 00:19:51.615 }, 00:19:51.615 { 00:19:51.615 "dma_device_id": "system", 00:19:51.615 "dma_device_type": 1 00:19:51.615 }, 00:19:51.615 { 00:19:51.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.615 "dma_device_type": 2 00:19:51.615 } 00:19:51.615 ], 00:19:51.615 "driver_specific": { 00:19:51.615 "raid": { 00:19:51.615 "uuid": "43294837-a7ea-4bb0-8aac-9f6c6a0237f9", 00:19:51.615 "strip_size_kb": 0, 00:19:51.615 "state": "online", 00:19:51.615 "raid_level": "raid1", 00:19:51.615 "superblock": true, 00:19:51.615 "num_base_bdevs": 2, 00:19:51.615 "num_base_bdevs_discovered": 2, 00:19:51.615 "num_base_bdevs_operational": 2, 00:19:51.615 "base_bdevs_list": [ 00:19:51.615 { 00:19:51.615 "name": "BaseBdev1", 00:19:51.615 "uuid": "ff90d9fa-ca89-4480-ba3b-92c304b7e21d", 00:19:51.615 "is_configured": true, 00:19:51.615 "data_offset": 256, 00:19:51.615 "data_size": 7936 00:19:51.615 }, 00:19:51.615 { 00:19:51.615 "name": "BaseBdev2", 00:19:51.615 "uuid": "a143d4e5-38fd-47f6-9bec-6eb4398b6237", 00:19:51.615 "is_configured": true, 00:19:51.615 "data_offset": 256, 00:19:51.615 "data_size": 7936 00:19:51.615 } 
00:19:51.615 ] 00:19:51.615 } 00:19:51.615 } 00:19:51.615 }' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:51.615 BaseBdev2' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.615 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.873 16:27:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.873 [2024-10-08 16:27:45.050778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:51.873 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:51.874 16:27:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.874 "name": "Existed_Raid", 00:19:51.874 "uuid": "43294837-a7ea-4bb0-8aac-9f6c6a0237f9", 00:19:51.874 "strip_size_kb": 0, 00:19:51.874 "state": "online", 00:19:51.874 "raid_level": "raid1", 00:19:51.874 "superblock": true, 00:19:51.874 
"num_base_bdevs": 2, 00:19:51.874 "num_base_bdevs_discovered": 1, 00:19:51.874 "num_base_bdevs_operational": 1, 00:19:51.874 "base_bdevs_list": [ 00:19:51.874 { 00:19:51.874 "name": null, 00:19:51.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.874 "is_configured": false, 00:19:51.874 "data_offset": 0, 00:19:51.874 "data_size": 7936 00:19:51.874 }, 00:19:51.874 { 00:19:51.874 "name": "BaseBdev2", 00:19:51.874 "uuid": "a143d4e5-38fd-47f6-9bec-6eb4398b6237", 00:19:51.874 "is_configured": true, 00:19:51.874 "data_offset": 256, 00:19:51.874 "data_size": 7936 00:19:51.874 } 00:19:51.874 ] 00:19:51.874 }' 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.874 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.441 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.441 [2024-10-08 16:27:45.709312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:52.441 [2024-10-08 16:27:45.709439] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.699 [2024-10-08 16:27:45.794952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.699 [2024-10-08 16:27:45.795022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.699 [2024-10-08 16:27:45.795043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:52.699 16:27:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86719 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86719 ']' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86719 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86719 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.699 killing process with pid 86719 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86719' 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86719 00:19:52.699 [2024-10-08 16:27:45.889688] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.699 16:27:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86719 00:19:52.700 [2024-10-08 16:27:45.904618] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.075 16:27:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:54.075 00:19:54.075 real 0m5.789s 00:19:54.075 user 0m8.602s 00:19:54.075 sys 0m0.896s 00:19:54.075 16:27:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.075 16:27:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.075 ************************************ 00:19:54.075 END TEST raid_state_function_test_sb_4k 00:19:54.075 ************************************ 00:19:54.075 16:27:47 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:54.075 16:27:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:54.075 16:27:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:54.075 16:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.075 ************************************ 00:19:54.075 START TEST raid_superblock_test_4k 00:19:54.075 ************************************ 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:54.075 
16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86977 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86977 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86977 ']' 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.075 16:27:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.075 [2024-10-08 16:27:47.285086] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:54.075 [2024-10-08 16:27:47.285276] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86977 ] 00:19:54.333 [2024-10-08 16:27:47.454346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.592 [2024-10-08 16:27:47.697047] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.592 [2024-10-08 16:27:47.903354] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.592 [2024-10-08 16:27:47.903440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 malloc1 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 [2024-10-08 16:27:48.294668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.183 [2024-10-08 16:27:48.294757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.183 [2024-10-08 16:27:48.294792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:55.183 [2024-10-08 16:27:48.294810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.183 [2024-10-08 16:27:48.297655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.183 [2024-10-08 16:27:48.297700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.183 pt1 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 malloc2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 [2024-10-08 16:27:48.357744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:55.183 [2024-10-08 16:27:48.357830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.183 [2024-10-08 16:27:48.357876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:55.183 [2024-10-08 16:27:48.357891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.183 [2024-10-08 16:27:48.360804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.183 [2024-10-08 
16:27:48.360861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:55.183 pt2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 [2024-10-08 16:27:48.369822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:55.183 [2024-10-08 16:27:48.372217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:55.183 [2024-10-08 16:27:48.372463] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:55.183 [2024-10-08 16:27:48.372490] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:55.183 [2024-10-08 16:27:48.372832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:55.183 [2024-10-08 16:27:48.373089] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:55.183 [2024-10-08 16:27:48.373118] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:55.183 [2024-10-08 16:27:48.373297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.183 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.184 "name": "raid_bdev1", 00:19:55.184 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:55.184 "strip_size_kb": 0, 00:19:55.184 "state": "online", 00:19:55.184 "raid_level": "raid1", 00:19:55.184 "superblock": true, 00:19:55.184 "num_base_bdevs": 2, 00:19:55.184 
"num_base_bdevs_discovered": 2, 00:19:55.184 "num_base_bdevs_operational": 2, 00:19:55.184 "base_bdevs_list": [ 00:19:55.184 { 00:19:55.184 "name": "pt1", 00:19:55.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.184 "is_configured": true, 00:19:55.184 "data_offset": 256, 00:19:55.184 "data_size": 7936 00:19:55.184 }, 00:19:55.184 { 00:19:55.184 "name": "pt2", 00:19:55.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.184 "is_configured": true, 00:19:55.184 "data_offset": 256, 00:19:55.184 "data_size": 7936 00:19:55.184 } 00:19:55.184 ] 00:19:55.184 }' 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.184 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.750 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:55.750 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:55.751 [2024-10-08 16:27:48.918310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.751 "name": "raid_bdev1", 00:19:55.751 "aliases": [ 00:19:55.751 "808f0889-f8f4-4661-8eab-0067856da22f" 00:19:55.751 ], 00:19:55.751 "product_name": "Raid Volume", 00:19:55.751 "block_size": 4096, 00:19:55.751 "num_blocks": 7936, 00:19:55.751 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:55.751 "assigned_rate_limits": { 00:19:55.751 "rw_ios_per_sec": 0, 00:19:55.751 "rw_mbytes_per_sec": 0, 00:19:55.751 "r_mbytes_per_sec": 0, 00:19:55.751 "w_mbytes_per_sec": 0 00:19:55.751 }, 00:19:55.751 "claimed": false, 00:19:55.751 "zoned": false, 00:19:55.751 "supported_io_types": { 00:19:55.751 "read": true, 00:19:55.751 "write": true, 00:19:55.751 "unmap": false, 00:19:55.751 "flush": false, 00:19:55.751 "reset": true, 00:19:55.751 "nvme_admin": false, 00:19:55.751 "nvme_io": false, 00:19:55.751 "nvme_io_md": false, 00:19:55.751 "write_zeroes": true, 00:19:55.751 "zcopy": false, 00:19:55.751 "get_zone_info": false, 00:19:55.751 "zone_management": false, 00:19:55.751 "zone_append": false, 00:19:55.751 "compare": false, 00:19:55.751 "compare_and_write": false, 00:19:55.751 "abort": false, 00:19:55.751 "seek_hole": false, 00:19:55.751 "seek_data": false, 00:19:55.751 "copy": false, 00:19:55.751 "nvme_iov_md": false 00:19:55.751 }, 00:19:55.751 "memory_domains": [ 00:19:55.751 { 00:19:55.751 "dma_device_id": "system", 00:19:55.751 "dma_device_type": 1 00:19:55.751 }, 00:19:55.751 { 00:19:55.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.751 "dma_device_type": 2 00:19:55.751 }, 00:19:55.751 { 00:19:55.751 "dma_device_id": "system", 00:19:55.751 "dma_device_type": 1 00:19:55.751 }, 00:19:55.751 { 00:19:55.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.751 "dma_device_type": 2 00:19:55.751 } 00:19:55.751 ], 
00:19:55.751 "driver_specific": { 00:19:55.751 "raid": { 00:19:55.751 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:55.751 "strip_size_kb": 0, 00:19:55.751 "state": "online", 00:19:55.751 "raid_level": "raid1", 00:19:55.751 "superblock": true, 00:19:55.751 "num_base_bdevs": 2, 00:19:55.751 "num_base_bdevs_discovered": 2, 00:19:55.751 "num_base_bdevs_operational": 2, 00:19:55.751 "base_bdevs_list": [ 00:19:55.751 { 00:19:55.751 "name": "pt1", 00:19:55.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.751 "is_configured": true, 00:19:55.751 "data_offset": 256, 00:19:55.751 "data_size": 7936 00:19:55.751 }, 00:19:55.751 { 00:19:55.751 "name": "pt2", 00:19:55.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.751 "is_configured": true, 00:19:55.751 "data_offset": 256, 00:19:55.751 "data_size": 7936 00:19:55.751 } 00:19:55.751 ] 00:19:55.751 } 00:19:55.751 } 00:19:55.751 }' 00:19:55.751 16:27:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:55.751 pt2' 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:55.751 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.751 16:27:49 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:56.009 [2024-10-08 16:27:49.170315] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=808f0889-f8f4-4661-8eab-0067856da22f 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 808f0889-f8f4-4661-8eab-0067856da22f ']' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 [2024-10-08 16:27:49.214056] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.009 [2024-10-08 16:27:49.214098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.009 [2024-10-08 16:27:49.214195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.009 [2024-10-08 16:27:49.214273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.009 [2024-10-08 16:27:49.214300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.009 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:56.010 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.010 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.010 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:56.010 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.268 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.268 [2024-10-08 16:27:49.342073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:56.268 [2024-10-08 16:27:49.344588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:56.268 [2024-10-08 16:27:49.344683] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:56.268 [2024-10-08 16:27:49.344757] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:56.268 [2024-10-08 16:27:49.344782] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.269 [2024-10-08 16:27:49.344797] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:56.269 request: 00:19:56.269 { 00:19:56.269 "name": "raid_bdev1", 00:19:56.269 "raid_level": "raid1", 00:19:56.269 "base_bdevs": [ 00:19:56.269 "malloc1", 00:19:56.269 "malloc2" 00:19:56.269 ], 00:19:56.269 "superblock": false, 00:19:56.269 "method": "bdev_raid_create", 00:19:56.269 "req_id": 1 00:19:56.269 } 00:19:56.269 Got JSON-RPC error response 00:19:56.269 response: 00:19:56.269 { 00:19:56.269 "code": -17, 00:19:56.269 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:56.269 } 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 [2024-10-08 16:27:49.406111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.269 [2024-10-08 16:27:49.406183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.269 [2024-10-08 16:27:49.406208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:56.269 [2024-10-08 16:27:49.406226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.269 [2024-10-08 16:27:49.409209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.269 [2024-10-08 16:27:49.409257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.269 [2024-10-08 16:27:49.409366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:56.269 [2024-10-08 16:27:49.409444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.269 pt1 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.269 "name": "raid_bdev1", 00:19:56.269 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:56.269 "strip_size_kb": 0, 00:19:56.269 "state": "configuring", 00:19:56.269 "raid_level": "raid1", 00:19:56.269 "superblock": true, 00:19:56.269 "num_base_bdevs": 2, 00:19:56.269 "num_base_bdevs_discovered": 1, 00:19:56.269 "num_base_bdevs_operational": 2, 00:19:56.269 "base_bdevs_list": [ 00:19:56.269 { 00:19:56.269 "name": "pt1", 00:19:56.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.269 "is_configured": true, 00:19:56.269 "data_offset": 256, 00:19:56.269 "data_size": 7936 00:19:56.269 }, 00:19:56.269 { 00:19:56.269 "name": null, 00:19:56.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.269 "is_configured": false, 00:19:56.269 "data_offset": 256, 00:19:56.269 "data_size": 7936 00:19:56.269 } 
00:19:56.269 ] 00:19:56.269 }' 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.269 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.835 [2024-10-08 16:27:49.894271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.835 [2024-10-08 16:27:49.894378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.835 [2024-10-08 16:27:49.894410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:56.835 [2024-10-08 16:27:49.894429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.835 [2024-10-08 16:27:49.895111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.835 [2024-10-08 16:27:49.895155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.835 [2024-10-08 16:27:49.895259] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.835 [2024-10-08 16:27:49.895295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.835 [2024-10-08 16:27:49.895458] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:56.835 [2024-10-08 16:27:49.895477] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:56.835 [2024-10-08 16:27:49.895813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:56.835 [2024-10-08 16:27:49.896031] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:56.835 [2024-10-08 16:27:49.896055] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:56.835 [2024-10-08 16:27:49.896222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.835 pt2 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.835 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.835 "name": "raid_bdev1", 00:19:56.835 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:56.835 "strip_size_kb": 0, 00:19:56.835 "state": "online", 00:19:56.835 "raid_level": "raid1", 00:19:56.835 "superblock": true, 00:19:56.835 "num_base_bdevs": 2, 00:19:56.835 "num_base_bdevs_discovered": 2, 00:19:56.835 "num_base_bdevs_operational": 2, 00:19:56.835 "base_bdevs_list": [ 00:19:56.835 { 00:19:56.835 "name": "pt1", 00:19:56.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.835 "is_configured": true, 00:19:56.835 "data_offset": 256, 00:19:56.835 "data_size": 7936 00:19:56.835 }, 00:19:56.835 { 00:19:56.835 "name": "pt2", 00:19:56.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.835 "is_configured": true, 00:19:56.835 "data_offset": 256, 00:19:56.836 "data_size": 7936 00:19:56.836 } 00:19:56.836 ] 00:19:56.836 }' 00:19:56.836 16:27:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.836 16:27:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.094 [2024-10-08 16:27:50.394742] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.094 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.352 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.352 "name": "raid_bdev1", 00:19:57.352 "aliases": [ 00:19:57.352 "808f0889-f8f4-4661-8eab-0067856da22f" 00:19:57.352 ], 00:19:57.352 "product_name": "Raid Volume", 00:19:57.352 "block_size": 4096, 00:19:57.352 "num_blocks": 7936, 00:19:57.352 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:57.352 "assigned_rate_limits": { 00:19:57.352 "rw_ios_per_sec": 0, 00:19:57.352 "rw_mbytes_per_sec": 0, 00:19:57.352 "r_mbytes_per_sec": 0, 00:19:57.352 "w_mbytes_per_sec": 0 00:19:57.352 }, 00:19:57.352 "claimed": false, 00:19:57.353 "zoned": false, 00:19:57.353 "supported_io_types": { 00:19:57.353 "read": true, 00:19:57.353 "write": true, 00:19:57.353 "unmap": false, 
00:19:57.353 "flush": false, 00:19:57.353 "reset": true, 00:19:57.353 "nvme_admin": false, 00:19:57.353 "nvme_io": false, 00:19:57.353 "nvme_io_md": false, 00:19:57.353 "write_zeroes": true, 00:19:57.353 "zcopy": false, 00:19:57.353 "get_zone_info": false, 00:19:57.353 "zone_management": false, 00:19:57.353 "zone_append": false, 00:19:57.353 "compare": false, 00:19:57.353 "compare_and_write": false, 00:19:57.353 "abort": false, 00:19:57.353 "seek_hole": false, 00:19:57.353 "seek_data": false, 00:19:57.353 "copy": false, 00:19:57.353 "nvme_iov_md": false 00:19:57.353 }, 00:19:57.353 "memory_domains": [ 00:19:57.353 { 00:19:57.353 "dma_device_id": "system", 00:19:57.353 "dma_device_type": 1 00:19:57.353 }, 00:19:57.353 { 00:19:57.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.353 "dma_device_type": 2 00:19:57.353 }, 00:19:57.353 { 00:19:57.353 "dma_device_id": "system", 00:19:57.353 "dma_device_type": 1 00:19:57.353 }, 00:19:57.353 { 00:19:57.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.353 "dma_device_type": 2 00:19:57.353 } 00:19:57.353 ], 00:19:57.353 "driver_specific": { 00:19:57.353 "raid": { 00:19:57.353 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:57.353 "strip_size_kb": 0, 00:19:57.353 "state": "online", 00:19:57.353 "raid_level": "raid1", 00:19:57.353 "superblock": true, 00:19:57.353 "num_base_bdevs": 2, 00:19:57.353 "num_base_bdevs_discovered": 2, 00:19:57.353 "num_base_bdevs_operational": 2, 00:19:57.353 "base_bdevs_list": [ 00:19:57.353 { 00:19:57.353 "name": "pt1", 00:19:57.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.353 "is_configured": true, 00:19:57.353 "data_offset": 256, 00:19:57.353 "data_size": 7936 00:19:57.353 }, 00:19:57.353 { 00:19:57.353 "name": "pt2", 00:19:57.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.353 "is_configured": true, 00:19:57.353 "data_offset": 256, 00:19:57.353 "data_size": 7936 00:19:57.353 } 00:19:57.353 ] 00:19:57.353 } 00:19:57.353 } 00:19:57.353 }' 00:19:57.353 
16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.353 pt2' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.353 
16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.353 [2024-10-08 16:27:50.638769] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.353 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 808f0889-f8f4-4661-8eab-0067856da22f '!=' 808f0889-f8f4-4661-8eab-0067856da22f ']' 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 [2024-10-08 16:27:50.686545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:57.611 
16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.611 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.612 "name": "raid_bdev1", 00:19:57.612 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 
00:19:57.612 "strip_size_kb": 0, 00:19:57.612 "state": "online", 00:19:57.612 "raid_level": "raid1", 00:19:57.612 "superblock": true, 00:19:57.612 "num_base_bdevs": 2, 00:19:57.612 "num_base_bdevs_discovered": 1, 00:19:57.612 "num_base_bdevs_operational": 1, 00:19:57.612 "base_bdevs_list": [ 00:19:57.612 { 00:19:57.612 "name": null, 00:19:57.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.612 "is_configured": false, 00:19:57.612 "data_offset": 0, 00:19:57.612 "data_size": 7936 00:19:57.612 }, 00:19:57.612 { 00:19:57.612 "name": "pt2", 00:19:57.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.612 "is_configured": true, 00:19:57.612 "data_offset": 256, 00:19:57.612 "data_size": 7936 00:19:57.612 } 00:19:57.612 ] 00:19:57.612 }' 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.612 16:27:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.194 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.194 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.194 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.194 [2024-10-08 16:27:51.202699] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.195 [2024-10-08 16:27:51.202754] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.195 [2024-10-08 16:27:51.202868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.195 [2024-10-08 16:27:51.202930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.195 [2024-10-08 16:27:51.202971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:58.195 16:27:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:58.195 16:27:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 [2024-10-08 16:27:51.278690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.195 [2024-10-08 16:27:51.278777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.195 [2024-10-08 16:27:51.278802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:58.195 [2024-10-08 16:27:51.278819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.195 [2024-10-08 16:27:51.281746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.195 [2024-10-08 16:27:51.281795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.195 [2024-10-08 16:27:51.281905] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.195 [2024-10-08 16:27:51.281987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.195 [2024-10-08 16:27:51.282116] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:58.195 [2024-10-08 16:27:51.282137] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.195 [2024-10-08 16:27:51.282433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:58.195 [2024-10-08 16:27:51.282646] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:58.195 [2024-10-08 16:27:51.282663] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:58.195 [2024-10-08 16:27:51.282897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.195 pt2 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.195 "name": "raid_bdev1", 00:19:58.195 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:58.195 "strip_size_kb": 0, 00:19:58.195 "state": "online", 00:19:58.195 "raid_level": "raid1", 00:19:58.195 "superblock": true, 00:19:58.195 "num_base_bdevs": 2, 00:19:58.195 "num_base_bdevs_discovered": 1, 00:19:58.195 "num_base_bdevs_operational": 1, 00:19:58.195 "base_bdevs_list": [ 00:19:58.195 { 00:19:58.195 "name": null, 00:19:58.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.195 "is_configured": false, 00:19:58.195 "data_offset": 256, 00:19:58.195 "data_size": 7936 00:19:58.195 }, 00:19:58.195 { 00:19:58.195 "name": "pt2", 00:19:58.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.195 "is_configured": true, 00:19:58.195 "data_offset": 256, 00:19:58.195 "data_size": 7936 00:19:58.195 } 00:19:58.195 ] 00:19:58.195 }' 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.195 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.761 [2024-10-08 16:27:51.790988] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.761 [2024-10-08 16:27:51.791046] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.761 [2024-10-08 16:27:51.791152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.761 [2024-10-08 16:27:51.791243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.761 [2024-10-08 16:27:51.791257] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.761 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.761 [2024-10-08 16:27:51.850983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:58.761 [2024-10-08 16:27:51.851052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.761 [2024-10-08 16:27:51.851112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:58.761 [2024-10-08 16:27:51.851127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.761 [2024-10-08 16:27:51.854000] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.761 [2024-10-08 16:27:51.854053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:58.761 [2024-10-08 16:27:51.854154] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:58.761 [2024-10-08 16:27:51.854211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.761 [2024-10-08 16:27:51.854383] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:58.761 [2024-10-08 16:27:51.854400] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.761 [2024-10-08 16:27:51.854423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:58.761 [2024-10-08 16:27:51.854491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.762 [2024-10-08 16:27:51.854611] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:58.762 [2024-10-08 16:27:51.854627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.762 [2024-10-08 16:27:51.854900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:58.762 [2024-10-08 16:27:51.855103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:58.762 [2024-10-08 16:27:51.855130] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:58.762 [2024-10-08 16:27:51.855365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.762 pt1 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.762 "name": "raid_bdev1", 00:19:58.762 "uuid": "808f0889-f8f4-4661-8eab-0067856da22f", 00:19:58.762 "strip_size_kb": 0, 00:19:58.762 "state": "online", 00:19:58.762 "raid_level": "raid1", 
00:19:58.762 "superblock": true, 00:19:58.762 "num_base_bdevs": 2, 00:19:58.762 "num_base_bdevs_discovered": 1, 00:19:58.762 "num_base_bdevs_operational": 1, 00:19:58.762 "base_bdevs_list": [ 00:19:58.762 { 00:19:58.762 "name": null, 00:19:58.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.762 "is_configured": false, 00:19:58.762 "data_offset": 256, 00:19:58.762 "data_size": 7936 00:19:58.762 }, 00:19:58.762 { 00:19:58.762 "name": "pt2", 00:19:58.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.762 "is_configured": true, 00:19:58.762 "data_offset": 256, 00:19:58.762 "data_size": 7936 00:19:58.762 } 00:19:58.762 ] 00:19:58.762 }' 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.762 16:27:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:59.329 
[2024-10-08 16:27:52.419761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 808f0889-f8f4-4661-8eab-0067856da22f '!=' 808f0889-f8f4-4661-8eab-0067856da22f ']' 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86977 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86977 ']' 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86977 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86977 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.329 killing process with pid 86977 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86977' 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86977 00:19:59.329 [2024-10-08 16:27:52.501808] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.329 16:27:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86977 00:19:59.329 [2024-10-08 16:27:52.501951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.329 [2024-10-08 16:27:52.502012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:59.329 [2024-10-08 16:27:52.502039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:59.587 [2024-10-08 16:27:52.688261] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.970 16:27:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:00.970 00:20:00.970 real 0m6.728s 00:20:00.970 user 0m10.478s 00:20:00.970 sys 0m0.992s 00:20:00.970 16:27:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.970 16:27:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 ************************************ 00:20:00.970 END TEST raid_superblock_test_4k 00:20:00.970 ************************************ 00:20:00.970 16:27:53 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:00.970 16:27:53 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:00.970 16:27:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:00.970 16:27:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.970 16:27:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 ************************************ 00:20:00.970 START TEST raid_rebuild_test_sb_4k 00:20:00.970 ************************************ 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:00.970 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:00.971 16:27:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87306 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87306 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 87306 ']' 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.971 16:27:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.971 [2024-10-08 16:27:54.097814] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:20:00.971 [2024-10-08 16:27:54.098245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87306 ] 00:20:00.971 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:00.971 Zero copy mechanism will not be used. 00:20:00.971 [2024-10-08 16:27:54.278299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.230 [2024-10-08 16:27:54.549178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.488 [2024-10-08 16:27:54.755627] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.488 [2024-10-08 16:27:54.755681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.746 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 BaseBdev1_malloc 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 [2024-10-08 16:27:55.102939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.006 [2024-10-08 16:27:55.103047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.006 [2024-10-08 16:27:55.103090] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:20:02.006 [2024-10-08 16:27:55.103143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.006 [2024-10-08 16:27:55.106029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.006 [2024-10-08 16:27:55.106077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.006 BaseBdev1 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 BaseBdev2_malloc 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 [2024-10-08 16:27:55.170876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:02.006 [2024-10-08 16:27:55.170972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.006 [2024-10-08 16:27:55.171002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:02.006 [2024-10-08 16:27:55.171023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:02.006 [2024-10-08 16:27:55.173783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.006 [2024-10-08 16:27:55.174056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.006 BaseBdev2 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 spare_malloc 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 spare_delay 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 [2024-10-08 16:27:55.227150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:02.006 [2024-10-08 16:27:55.227249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.006 [2024-10-08 16:27:55.227278] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:02.006 [2024-10-08 16:27:55.227297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.006 [2024-10-08 16:27:55.230113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.006 [2024-10-08 16:27:55.230164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:02.006 spare 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 [2024-10-08 16:27:55.235229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.006 [2024-10-08 16:27:55.237619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.006 [2024-10-08 16:27:55.237850] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:02.006 [2024-10-08 16:27:55.237873] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:02.006 [2024-10-08 16:27:55.238202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:02.006 [2024-10-08 16:27:55.238422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:02.006 [2024-10-08 16:27:55.238438] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:02.006 [2024-10-08 16:27:55.238675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.006 
16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.006 "name": "raid_bdev1", 00:20:02.006 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 
00:20:02.006 "strip_size_kb": 0, 00:20:02.006 "state": "online", 00:20:02.006 "raid_level": "raid1", 00:20:02.006 "superblock": true, 00:20:02.006 "num_base_bdevs": 2, 00:20:02.006 "num_base_bdevs_discovered": 2, 00:20:02.006 "num_base_bdevs_operational": 2, 00:20:02.006 "base_bdevs_list": [ 00:20:02.006 { 00:20:02.006 "name": "BaseBdev1", 00:20:02.006 "uuid": "c87458b7-57af-5796-b54e-3e7dfd6496a0", 00:20:02.006 "is_configured": true, 00:20:02.006 "data_offset": 256, 00:20:02.006 "data_size": 7936 00:20:02.006 }, 00:20:02.006 { 00:20:02.006 "name": "BaseBdev2", 00:20:02.006 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:02.006 "is_configured": true, 00:20:02.006 "data_offset": 256, 00:20:02.006 "data_size": 7936 00:20:02.006 } 00:20:02.006 ] 00:20:02.006 }' 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.006 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.573 [2024-10-08 16:27:55.755747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:02.573 16:27:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:03.138 [2024-10-08 16:27:56.171630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:20:03.138 /dev/nbd0 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.138 1+0 records in 00:20:03.138 1+0 records out 00:20:03.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064325 s, 6.4 MB/s 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:03.138 16:27:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:03.138 16:27:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:04.073 7936+0 records in 00:20:04.073 7936+0 records out 00:20:04.073 32505856 bytes (33 MB, 31 MiB) copied, 0.926039 s, 35.1 MB/s 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.073 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:04.331 [2024-10-08 16:27:57.474161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.331 [2024-10-08 16:27:57.489941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.331 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.332 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.332 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.332 "name": "raid_bdev1", 00:20:04.332 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:04.332 "strip_size_kb": 0, 00:20:04.332 "state": "online", 00:20:04.332 "raid_level": "raid1", 00:20:04.332 "superblock": true, 00:20:04.332 "num_base_bdevs": 2, 00:20:04.332 "num_base_bdevs_discovered": 1, 00:20:04.332 "num_base_bdevs_operational": 1, 00:20:04.332 "base_bdevs_list": [ 00:20:04.332 { 00:20:04.332 "name": null, 00:20:04.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.332 "is_configured": false, 00:20:04.332 "data_offset": 0, 00:20:04.332 "data_size": 7936 00:20:04.332 }, 00:20:04.332 { 00:20:04.332 "name": "BaseBdev2", 00:20:04.332 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:04.332 "is_configured": true, 00:20:04.332 "data_offset": 256, 00:20:04.332 "data_size": 7936 00:20:04.332 } 00:20:04.332 ] 00:20:04.332 }' 00:20:04.332 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.332 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.898 16:27:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:04.898 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.898 16:27:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.898 [2024-10-08 16:27:58.002215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:04.898 [2024-10-08 16:27:58.018240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:04.898 16:27:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.898 16:27:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:04.898 [2024-10-08 16:27:58.020745] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.832 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.832 16:27:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.833 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.833 "name": "raid_bdev1", 00:20:05.833 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:05.833 "strip_size_kb": 0, 00:20:05.833 "state": "online", 00:20:05.833 "raid_level": "raid1", 00:20:05.833 "superblock": true, 00:20:05.833 "num_base_bdevs": 2, 00:20:05.833 "num_base_bdevs_discovered": 2, 00:20:05.833 "num_base_bdevs_operational": 2, 00:20:05.833 "process": { 00:20:05.833 "type": "rebuild", 00:20:05.833 "target": "spare", 00:20:05.833 "progress": { 00:20:05.833 "blocks": 2560, 00:20:05.833 "percent": 32 00:20:05.833 } 00:20:05.833 }, 00:20:05.833 "base_bdevs_list": [ 00:20:05.833 { 00:20:05.833 "name": "spare", 00:20:05.833 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:05.833 "is_configured": true, 00:20:05.833 "data_offset": 256, 00:20:05.833 "data_size": 7936 00:20:05.833 }, 00:20:05.833 { 00:20:05.833 "name": "BaseBdev2", 00:20:05.833 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:05.833 "is_configured": true, 00:20:05.833 "data_offset": 256, 00:20:05.833 "data_size": 7936 00:20:05.833 } 00:20:05.833 ] 00:20:05.833 }' 00:20:05.833 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.833 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.833 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.091 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.091 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:06.091 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.091 16:27:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.091 [2024-10-08 16:27:59.190113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.091 [2024-10-08 16:27:59.230115] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:06.092 [2024-10-08 16:27:59.230204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.092 [2024-10-08 16:27:59.230226] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.092 [2024-10-08 16:27:59.230240] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.092 16:27:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.092 "name": "raid_bdev1", 00:20:06.092 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:06.092 "strip_size_kb": 0, 00:20:06.092 "state": "online", 00:20:06.092 "raid_level": "raid1", 00:20:06.092 "superblock": true, 00:20:06.092 "num_base_bdevs": 2, 00:20:06.092 "num_base_bdevs_discovered": 1, 00:20:06.092 "num_base_bdevs_operational": 1, 00:20:06.092 "base_bdevs_list": [ 00:20:06.092 { 00:20:06.092 "name": null, 00:20:06.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.092 "is_configured": false, 00:20:06.092 "data_offset": 0, 00:20:06.092 "data_size": 7936 00:20:06.092 }, 00:20:06.092 { 00:20:06.092 "name": "BaseBdev2", 00:20:06.092 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:06.092 "is_configured": true, 00:20:06.092 "data_offset": 256, 00:20:06.092 "data_size": 7936 00:20:06.092 } 00:20:06.092 ] 00:20:06.092 }' 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.092 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.660 16:27:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.660 "name": "raid_bdev1", 00:20:06.660 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:06.660 "strip_size_kb": 0, 00:20:06.660 "state": "online", 00:20:06.660 "raid_level": "raid1", 00:20:06.660 "superblock": true, 00:20:06.660 "num_base_bdevs": 2, 00:20:06.660 "num_base_bdevs_discovered": 1, 00:20:06.660 "num_base_bdevs_operational": 1, 00:20:06.660 "base_bdevs_list": [ 00:20:06.660 { 00:20:06.660 "name": null, 00:20:06.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.660 "is_configured": false, 00:20:06.660 "data_offset": 0, 00:20:06.660 "data_size": 7936 00:20:06.660 }, 00:20:06.660 { 00:20:06.660 "name": "BaseBdev2", 00:20:06.660 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:06.660 "is_configured": true, 00:20:06.660 "data_offset": 256, 00:20:06.660 "data_size": 7936 00:20:06.660 } 00:20:06.660 ] 00:20:06.660 }' 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.660 16:27:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.660 [2024-10-08 16:27:59.951412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.660 [2024-10-08 16:27:59.966084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.660 16:27:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:06.660 [2024-10-08 16:27:59.968631] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.036 16:28:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.036 "name": "raid_bdev1", 00:20:08.036 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:08.036 "strip_size_kb": 0, 00:20:08.036 "state": "online", 00:20:08.036 "raid_level": "raid1", 00:20:08.036 "superblock": true, 00:20:08.036 "num_base_bdevs": 2, 00:20:08.036 "num_base_bdevs_discovered": 2, 00:20:08.036 "num_base_bdevs_operational": 2, 00:20:08.036 "process": { 00:20:08.036 "type": "rebuild", 00:20:08.036 "target": "spare", 00:20:08.036 "progress": { 00:20:08.036 "blocks": 2560, 00:20:08.036 "percent": 32 00:20:08.036 } 00:20:08.036 }, 00:20:08.036 "base_bdevs_list": [ 00:20:08.036 { 00:20:08.036 "name": "spare", 00:20:08.036 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:08.036 "is_configured": true, 00:20:08.036 "data_offset": 256, 00:20:08.036 "data_size": 7936 00:20:08.036 }, 00:20:08.036 { 00:20:08.036 "name": "BaseBdev2", 00:20:08.036 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:08.036 "is_configured": true, 00:20:08.036 "data_offset": 256, 00:20:08.036 "data_size": 7936 00:20:08.036 } 00:20:08.036 ] 00:20:08.036 }' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:08.036 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=751 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.036 16:28:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.036 "name": "raid_bdev1", 00:20:08.036 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:08.036 "strip_size_kb": 0, 00:20:08.036 "state": "online", 00:20:08.036 "raid_level": "raid1", 00:20:08.036 "superblock": true, 00:20:08.036 "num_base_bdevs": 2, 00:20:08.036 "num_base_bdevs_discovered": 2, 00:20:08.036 "num_base_bdevs_operational": 2, 00:20:08.036 "process": { 00:20:08.036 "type": "rebuild", 00:20:08.036 "target": "spare", 00:20:08.036 "progress": { 00:20:08.036 "blocks": 2816, 00:20:08.036 "percent": 35 00:20:08.036 } 00:20:08.036 }, 00:20:08.036 "base_bdevs_list": [ 00:20:08.036 { 00:20:08.036 "name": "spare", 00:20:08.036 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:08.036 "is_configured": true, 00:20:08.036 "data_offset": 256, 00:20:08.036 "data_size": 7936 00:20:08.036 }, 00:20:08.036 { 00:20:08.036 "name": "BaseBdev2", 00:20:08.036 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:08.036 "is_configured": true, 00:20:08.036 "data_offset": 256, 00:20:08.036 "data_size": 7936 00:20:08.036 } 00:20:08.036 ] 00:20:08.036 }' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.036 16:28:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.412 "name": "raid_bdev1", 00:20:09.412 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:09.412 "strip_size_kb": 0, 00:20:09.412 "state": "online", 00:20:09.412 "raid_level": "raid1", 00:20:09.412 "superblock": true, 00:20:09.412 "num_base_bdevs": 2, 00:20:09.412 "num_base_bdevs_discovered": 2, 00:20:09.412 "num_base_bdevs_operational": 2, 00:20:09.412 "process": { 00:20:09.412 "type": "rebuild", 00:20:09.412 "target": "spare", 00:20:09.412 "progress": { 00:20:09.412 "blocks": 5888, 00:20:09.412 "percent": 74 00:20:09.412 } 00:20:09.412 }, 00:20:09.412 "base_bdevs_list": [ 00:20:09.412 { 00:20:09.412 "name": "spare", 00:20:09.412 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:09.412 "is_configured": true, 00:20:09.412 "data_offset": 256, 00:20:09.412 "data_size": 7936 00:20:09.412 
}, 00:20:09.412 { 00:20:09.412 "name": "BaseBdev2", 00:20:09.412 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:09.412 "is_configured": true, 00:20:09.412 "data_offset": 256, 00:20:09.412 "data_size": 7936 00:20:09.412 } 00:20:09.412 ] 00:20:09.412 }' 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.412 16:28:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.980 [2024-10-08 16:28:03.093389] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:09.980 [2024-10-08 16:28:03.093847] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:09.980 [2024-10-08 16:28:03.094068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.239 "name": "raid_bdev1", 00:20:10.239 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:10.239 "strip_size_kb": 0, 00:20:10.239 "state": "online", 00:20:10.239 "raid_level": "raid1", 00:20:10.239 "superblock": true, 00:20:10.239 "num_base_bdevs": 2, 00:20:10.239 "num_base_bdevs_discovered": 2, 00:20:10.239 "num_base_bdevs_operational": 2, 00:20:10.239 "base_bdevs_list": [ 00:20:10.239 { 00:20:10.239 "name": "spare", 00:20:10.239 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:10.239 "is_configured": true, 00:20:10.239 "data_offset": 256, 00:20:10.239 "data_size": 7936 00:20:10.239 }, 00:20:10.239 { 00:20:10.239 "name": "BaseBdev2", 00:20:10.239 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:10.239 "is_configured": true, 00:20:10.239 "data_offset": 256, 00:20:10.239 "data_size": 7936 00:20:10.239 } 00:20:10.239 ] 00:20:10.239 }' 00:20:10.239 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.498 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:10.498 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.498 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:10.498 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:20:10.498 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.499 "name": "raid_bdev1", 00:20:10.499 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:10.499 "strip_size_kb": 0, 00:20:10.499 "state": "online", 00:20:10.499 "raid_level": "raid1", 00:20:10.499 "superblock": true, 00:20:10.499 "num_base_bdevs": 2, 00:20:10.499 "num_base_bdevs_discovered": 2, 00:20:10.499 "num_base_bdevs_operational": 2, 00:20:10.499 "base_bdevs_list": [ 00:20:10.499 { 00:20:10.499 "name": "spare", 00:20:10.499 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:10.499 "is_configured": true, 00:20:10.499 "data_offset": 256, 00:20:10.499 "data_size": 7936 00:20:10.499 }, 00:20:10.499 { 00:20:10.499 "name": "BaseBdev2", 00:20:10.499 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:10.499 "is_configured": true, 
00:20:10.499 "data_offset": 256, 00:20:10.499 "data_size": 7936 00:20:10.499 } 00:20:10.499 ] 00:20:10.499 }' 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.499 16:28:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.499 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.758 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.758 "name": "raid_bdev1", 00:20:10.758 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:10.758 "strip_size_kb": 0, 00:20:10.758 "state": "online", 00:20:10.758 "raid_level": "raid1", 00:20:10.758 "superblock": true, 00:20:10.758 "num_base_bdevs": 2, 00:20:10.758 "num_base_bdevs_discovered": 2, 00:20:10.758 "num_base_bdevs_operational": 2, 00:20:10.758 "base_bdevs_list": [ 00:20:10.758 { 00:20:10.758 "name": "spare", 00:20:10.758 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:10.758 "is_configured": true, 00:20:10.758 "data_offset": 256, 00:20:10.758 "data_size": 7936 00:20:10.758 }, 00:20:10.758 { 00:20:10.758 "name": "BaseBdev2", 00:20:10.758 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:10.758 "is_configured": true, 00:20:10.758 "data_offset": 256, 00:20:10.758 "data_size": 7936 00:20:10.758 } 00:20:10.758 ] 00:20:10.758 }' 00:20:10.758 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.758 16:28:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.017 [2024-10-08 16:28:04.296813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.017 [2024-10-08 16:28:04.297084] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:20:11.017 [2024-10-08 16:28:04.297240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.017 [2024-10-08 16:28:04.297340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.017 [2024-10-08 16:28:04.297357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:11.017 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.282 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:11.282 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:11.282 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:11.283 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:11.551 /dev/nbd0 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.551 1+0 records in 00:20:11.551 1+0 records out 00:20:11.551 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039585 s, 10.3 MB/s 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:11.551 16:28:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:11.810 /dev/nbd1 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:11.810 16:28:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.810 1+0 records in 00:20:11.810 1+0 records out 00:20:11.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480571 s, 8.5 MB/s 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:11.810 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.069 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.327 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.586 16:28:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.586 [2024-10-08 16:28:05.836557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:12.586 [2024-10-08 16:28:05.837205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.586 [2024-10-08 16:28:05.837262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:12.586 [2024-10-08 16:28:05.837281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.586 [2024-10-08 16:28:05.840270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.586 [2024-10-08 16:28:05.840317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:12.586 [2024-10-08 16:28:05.840452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:20:12.586 [2024-10-08 16:28:05.840571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.586 [2024-10-08 16:28:05.840789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.586 spare 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.586 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.845 [2024-10-08 16:28:05.940983] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:12.845 [2024-10-08 16:28:05.941060] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:12.845 [2024-10-08 16:28:05.941507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:12.845 [2024-10-08 16:28:05.941817] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:12.845 [2024-10-08 16:28:05.941835] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:12.845 [2024-10-08 16:28:05.942077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.845 
16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.845 16:28:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.845 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.845 "name": "raid_bdev1", 00:20:12.845 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:12.845 "strip_size_kb": 0, 00:20:12.845 "state": "online", 00:20:12.845 "raid_level": "raid1", 00:20:12.845 "superblock": true, 00:20:12.845 "num_base_bdevs": 2, 00:20:12.845 "num_base_bdevs_discovered": 2, 00:20:12.845 "num_base_bdevs_operational": 2, 00:20:12.845 "base_bdevs_list": [ 00:20:12.845 { 00:20:12.845 "name": "spare", 00:20:12.845 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:12.845 "is_configured": true, 00:20:12.845 "data_offset": 256, 00:20:12.845 
"data_size": 7936 00:20:12.845 }, 00:20:12.845 { 00:20:12.845 "name": "BaseBdev2", 00:20:12.845 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:12.845 "is_configured": true, 00:20:12.845 "data_offset": 256, 00:20:12.845 "data_size": 7936 00:20:12.845 } 00:20:12.845 ] 00:20:12.845 }' 00:20:12.845 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.845 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.104 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.104 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.104 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.104 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.104 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.362 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.362 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.363 "name": "raid_bdev1", 00:20:13.363 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:13.363 "strip_size_kb": 0, 00:20:13.363 "state": "online", 00:20:13.363 "raid_level": "raid1", 00:20:13.363 "superblock": true, 00:20:13.363 "num_base_bdevs": 2, 
00:20:13.363 "num_base_bdevs_discovered": 2, 00:20:13.363 "num_base_bdevs_operational": 2, 00:20:13.363 "base_bdevs_list": [ 00:20:13.363 { 00:20:13.363 "name": "spare", 00:20:13.363 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:13.363 "is_configured": true, 00:20:13.363 "data_offset": 256, 00:20:13.363 "data_size": 7936 00:20:13.363 }, 00:20:13.363 { 00:20:13.363 "name": "BaseBdev2", 00:20:13.363 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:13.363 "is_configured": true, 00:20:13.363 "data_offset": 256, 00:20:13.363 "data_size": 7936 00:20:13.363 } 00:20:13.363 ] 00:20:13.363 }' 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.363 16:28:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.363 [2024-10-08 16:28:06.637162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.363 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.621 
16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.622 "name": "raid_bdev1", 00:20:13.622 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:13.622 "strip_size_kb": 0, 00:20:13.622 "state": "online", 00:20:13.622 "raid_level": "raid1", 00:20:13.622 "superblock": true, 00:20:13.622 "num_base_bdevs": 2, 00:20:13.622 "num_base_bdevs_discovered": 1, 00:20:13.622 "num_base_bdevs_operational": 1, 00:20:13.622 "base_bdevs_list": [ 00:20:13.622 { 00:20:13.622 "name": null, 00:20:13.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.622 "is_configured": false, 00:20:13.622 "data_offset": 0, 00:20:13.622 "data_size": 7936 00:20:13.622 }, 00:20:13.622 { 00:20:13.622 "name": "BaseBdev2", 00:20:13.622 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:13.622 "is_configured": true, 00:20:13.622 "data_offset": 256, 00:20:13.622 "data_size": 7936 00:20:13.622 } 00:20:13.622 ] 00:20:13.622 }' 00:20:13.622 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.622 16:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.880 16:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.880 16:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.880 16:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.880 [2024-10-08 16:28:07.157328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.880 [2024-10-08 16:28:07.157617] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:13.880 [2024-10-08 16:28:07.157644] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:13.880 [2024-10-08 16:28:07.157691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.880 [2024-10-08 16:28:07.172336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:13.880 16:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.880 16:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:13.880 [2024-10-08 16:28:07.174906] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.288 "name": "raid_bdev1", 00:20:15.288 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:15.288 "strip_size_kb": 0, 00:20:15.288 "state": "online", 
00:20:15.288 "raid_level": "raid1", 00:20:15.288 "superblock": true, 00:20:15.288 "num_base_bdevs": 2, 00:20:15.288 "num_base_bdevs_discovered": 2, 00:20:15.288 "num_base_bdevs_operational": 2, 00:20:15.288 "process": { 00:20:15.288 "type": "rebuild", 00:20:15.288 "target": "spare", 00:20:15.288 "progress": { 00:20:15.288 "blocks": 2560, 00:20:15.288 "percent": 32 00:20:15.288 } 00:20:15.288 }, 00:20:15.288 "base_bdevs_list": [ 00:20:15.288 { 00:20:15.288 "name": "spare", 00:20:15.288 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:15.288 "is_configured": true, 00:20:15.288 "data_offset": 256, 00:20:15.288 "data_size": 7936 00:20:15.288 }, 00:20:15.288 { 00:20:15.288 "name": "BaseBdev2", 00:20:15.288 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:15.288 "is_configured": true, 00:20:15.288 "data_offset": 256, 00:20:15.288 "data_size": 7936 00:20:15.288 } 00:20:15.288 ] 00:20:15.288 }' 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.288 [2024-10-08 16:28:08.345025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.288 [2024-10-08 16:28:08.384972] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.288 [2024-10-08 
16:28:08.385048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.288 [2024-10-08 16:28:08.385071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.288 [2024-10-08 16:28:08.385085] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.288 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.289 "name": "raid_bdev1", 00:20:15.289 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:15.289 "strip_size_kb": 0, 00:20:15.289 "state": "online", 00:20:15.289 "raid_level": "raid1", 00:20:15.289 "superblock": true, 00:20:15.289 "num_base_bdevs": 2, 00:20:15.289 "num_base_bdevs_discovered": 1, 00:20:15.289 "num_base_bdevs_operational": 1, 00:20:15.289 "base_bdevs_list": [ 00:20:15.289 { 00:20:15.289 "name": null, 00:20:15.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.289 "is_configured": false, 00:20:15.289 "data_offset": 0, 00:20:15.289 "data_size": 7936 00:20:15.289 }, 00:20:15.289 { 00:20:15.289 "name": "BaseBdev2", 00:20:15.289 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:15.289 "is_configured": true, 00:20:15.289 "data_offset": 256, 00:20:15.289 "data_size": 7936 00:20:15.289 } 00:20:15.289 ] 00:20:15.289 }' 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.289 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.859 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:15.859 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.859 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.859 [2024-10-08 16:28:08.970203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:15.859 [2024-10-08 16:28:08.970323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.859 [2024-10-08 16:28:08.970356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:20:15.859 [2024-10-08 16:28:08.970375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.859 [2024-10-08 16:28:08.971114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.859 [2024-10-08 16:28:08.971152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:15.859 [2024-10-08 16:28:08.971316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:15.859 [2024-10-08 16:28:08.971342] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:15.859 [2024-10-08 16:28:08.971356] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:15.859 [2024-10-08 16:28:08.971387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.859 [2024-10-08 16:28:08.987712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:15.859 spare 00:20:15.859 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.859 16:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:15.859 [2024-10-08 16:28:08.990527] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.794 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.794 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.794 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.794 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.794 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.795 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.795 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.795 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.795 16:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.795 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.795 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.795 "name": "raid_bdev1", 00:20:16.795 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:16.795 "strip_size_kb": 0, 00:20:16.795 "state": "online", 00:20:16.795 "raid_level": "raid1", 00:20:16.795 "superblock": true, 00:20:16.795 "num_base_bdevs": 2, 00:20:16.795 "num_base_bdevs_discovered": 2, 00:20:16.795 "num_base_bdevs_operational": 2, 00:20:16.795 "process": { 00:20:16.795 "type": "rebuild", 00:20:16.795 "target": "spare", 00:20:16.795 "progress": { 00:20:16.795 "blocks": 2560, 00:20:16.795 "percent": 32 00:20:16.795 } 00:20:16.795 }, 00:20:16.795 "base_bdevs_list": [ 00:20:16.795 { 00:20:16.795 "name": "spare", 00:20:16.795 "uuid": "477861e3-c81c-541d-934f-0523517da8f0", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 256, 00:20:16.795 "data_size": 7936 00:20:16.795 }, 00:20:16.795 { 00:20:16.795 "name": "BaseBdev2", 00:20:16.795 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 256, 00:20:16.795 "data_size": 7936 00:20:16.795 } 00:20:16.795 ] 00:20:16.795 }' 00:20:16.795 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.053 [2024-10-08 16:28:10.200749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.053 [2024-10-08 16:28:10.201330] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:17.053 [2024-10-08 16:28:10.201391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.053 [2024-10-08 16:28:10.201414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.053 [2024-10-08 16:28:10.201425] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.053 "name": "raid_bdev1", 00:20:17.053 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:17.053 "strip_size_kb": 0, 00:20:17.053 "state": "online", 00:20:17.053 "raid_level": "raid1", 00:20:17.053 "superblock": true, 00:20:17.053 "num_base_bdevs": 2, 00:20:17.053 "num_base_bdevs_discovered": 1, 00:20:17.053 "num_base_bdevs_operational": 1, 00:20:17.053 "base_bdevs_list": [ 00:20:17.053 { 00:20:17.053 "name": null, 00:20:17.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.053 "is_configured": false, 00:20:17.053 "data_offset": 0, 00:20:17.053 "data_size": 7936 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "name": "BaseBdev2", 00:20:17.053 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:17.053 "is_configured": true, 00:20:17.053 "data_offset": 256, 00:20:17.053 "data_size": 7936 00:20:17.053 } 00:20:17.053 ] 00:20:17.053 }' 
00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.053 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.620 "name": "raid_bdev1", 00:20:17.620 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:17.620 "strip_size_kb": 0, 00:20:17.620 "state": "online", 00:20:17.620 "raid_level": "raid1", 00:20:17.620 "superblock": true, 00:20:17.620 "num_base_bdevs": 2, 00:20:17.620 "num_base_bdevs_discovered": 1, 00:20:17.620 "num_base_bdevs_operational": 1, 00:20:17.620 "base_bdevs_list": [ 00:20:17.620 { 00:20:17.620 "name": null, 00:20:17.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.620 "is_configured": false, 00:20:17.620 "data_offset": 0, 
00:20:17.620 "data_size": 7936 00:20:17.620 }, 00:20:17.620 { 00:20:17.620 "name": "BaseBdev2", 00:20:17.620 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:17.620 "is_configured": true, 00:20:17.620 "data_offset": 256, 00:20:17.620 "data_size": 7936 00:20:17.620 } 00:20:17.620 ] 00:20:17.620 }' 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 [2024-10-08 16:28:10.921024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:17.620 [2024-10-08 16:28:10.921094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.620 [2024-10-08 16:28:10.921139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:17.620 [2024-10-08 16:28:10.921154] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.620 [2024-10-08 16:28:10.921792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.620 [2024-10-08 16:28:10.921826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.620 [2024-10-08 16:28:10.921988] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:17.620 [2024-10-08 16:28:10.922009] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:17.620 [2024-10-08 16:28:10.922049] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:17.620 [2024-10-08 16:28:10.922063] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:17.620 BaseBdev1 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 16:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.023 "name": "raid_bdev1", 00:20:19.023 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:19.023 "strip_size_kb": 0, 00:20:19.023 "state": "online", 00:20:19.023 "raid_level": "raid1", 00:20:19.023 "superblock": true, 00:20:19.023 "num_base_bdevs": 2, 00:20:19.023 "num_base_bdevs_discovered": 1, 00:20:19.023 "num_base_bdevs_operational": 1, 00:20:19.023 "base_bdevs_list": [ 00:20:19.023 { 00:20:19.023 "name": null, 00:20:19.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.023 "is_configured": false, 00:20:19.023 "data_offset": 0, 00:20:19.023 "data_size": 7936 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "name": "BaseBdev2", 00:20:19.023 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:19.023 "is_configured": true, 00:20:19.023 "data_offset": 256, 00:20:19.023 "data_size": 7936 00:20:19.023 } 00:20:19.023 ] 00:20:19.023 }' 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.023 16:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.282 "name": "raid_bdev1", 00:20:19.282 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:19.282 "strip_size_kb": 0, 00:20:19.282 "state": "online", 00:20:19.282 "raid_level": "raid1", 00:20:19.282 "superblock": true, 00:20:19.282 "num_base_bdevs": 2, 00:20:19.282 "num_base_bdevs_discovered": 1, 00:20:19.282 "num_base_bdevs_operational": 1, 00:20:19.282 "base_bdevs_list": [ 00:20:19.282 { 00:20:19.282 "name": null, 00:20:19.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.282 "is_configured": false, 00:20:19.282 "data_offset": 0, 00:20:19.282 "data_size": 7936 00:20:19.282 }, 00:20:19.282 { 00:20:19.282 "name": "BaseBdev2", 00:20:19.282 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:19.282 "is_configured": true, 
00:20:19.282 "data_offset": 256, 00:20:19.282 "data_size": 7936 00:20:19.282 } 00:20:19.282 ] 00:20:19.282 }' 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.282 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.540 [2024-10-08 16:28:12.625635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.540 [2024-10-08 16:28:12.625820] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.540 [2024-10-08 16:28:12.625844] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:19.540 request: 00:20:19.540 { 00:20:19.540 "base_bdev": "BaseBdev1", 00:20:19.540 "raid_bdev": "raid_bdev1", 00:20:19.540 "method": "bdev_raid_add_base_bdev", 00:20:19.540 "req_id": 1 00:20:19.540 } 00:20:19.540 Got JSON-RPC error response 00:20:19.540 response: 00:20:19.540 { 00:20:19.540 "code": -22, 00:20:19.540 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:19.540 } 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.540 16:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.477 "name": "raid_bdev1", 00:20:20.477 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:20.477 "strip_size_kb": 0, 00:20:20.477 "state": "online", 00:20:20.477 "raid_level": "raid1", 00:20:20.477 "superblock": true, 00:20:20.477 "num_base_bdevs": 2, 00:20:20.477 "num_base_bdevs_discovered": 1, 00:20:20.477 "num_base_bdevs_operational": 1, 00:20:20.477 "base_bdevs_list": [ 00:20:20.477 { 00:20:20.477 "name": null, 00:20:20.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.477 "is_configured": false, 00:20:20.477 "data_offset": 0, 00:20:20.477 "data_size": 7936 00:20:20.477 }, 00:20:20.477 { 00:20:20.477 "name": "BaseBdev2", 00:20:20.477 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:20.477 "is_configured": true, 00:20:20.477 "data_offset": 256, 00:20:20.477 "data_size": 7936 00:20:20.477 } 00:20:20.477 ] 00:20:20.477 }' 
00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.477 16:28:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.044 "name": "raid_bdev1", 00:20:21.044 "uuid": "23ad491d-83e7-47f2-ae17-46a10d825c73", 00:20:21.044 "strip_size_kb": 0, 00:20:21.044 "state": "online", 00:20:21.044 "raid_level": "raid1", 00:20:21.044 "superblock": true, 00:20:21.044 "num_base_bdevs": 2, 00:20:21.044 "num_base_bdevs_discovered": 1, 00:20:21.044 "num_base_bdevs_operational": 1, 00:20:21.044 "base_bdevs_list": [ 00:20:21.044 { 00:20:21.044 "name": null, 00:20:21.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.044 "is_configured": false, 00:20:21.044 "data_offset": 0, 
00:20:21.044 "data_size": 7936 00:20:21.044 }, 00:20:21.044 { 00:20:21.044 "name": "BaseBdev2", 00:20:21.044 "uuid": "abe7fe2c-2108-5cad-9673-b8c7018e025b", 00:20:21.044 "is_configured": true, 00:20:21.044 "data_offset": 256, 00:20:21.044 "data_size": 7936 00:20:21.044 } 00:20:21.044 ] 00:20:21.044 }' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87306 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 87306 ']' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 87306 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87306 00:20:21.044 killing process with pid 87306 00:20:21.044 Received shutdown signal, test time was about 60.000000 seconds 00:20:21.044 00:20:21.044 Latency(us) 00:20:21.044 [2024-10-08T16:28:14.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.044 [2024-10-08T16:28:14.366Z] =================================================================================================================== 00:20:21.044 [2024-10-08T16:28:14.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.044 16:28:14 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87306' 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 87306 00:20:21.044 [2024-10-08 16:28:14.337619] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.044 16:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 87306 00:20:21.044 [2024-10-08 16:28:14.337775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.044 [2024-10-08 16:28:14.337874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.044 [2024-10-08 16:28:14.337894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:21.303 [2024-10-08 16:28:14.598749] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:22.687 ************************************ 00:20:22.687 END TEST raid_rebuild_test_sb_4k 00:20:22.687 ************************************ 00:20:22.687 16:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:22.687 00:20:22.687 real 0m21.761s 00:20:22.687 user 0m29.460s 00:20:22.687 sys 0m2.661s 00:20:22.687 16:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.687 16:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 16:28:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:22.687 16:28:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:22.687 16:28:15 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:22.687 16:28:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.687 16:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 ************************************ 00:20:22.687 START TEST raid_state_function_test_sb_md_separate 00:20:22.687 ************************************ 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88015 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:22.687 Process raid pid: 88015 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88015' 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88015 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88015 ']' 00:20:22.687 16:28:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.687 16:28:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 [2024-10-08 16:28:15.892848] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:20:22.687 [2024-10-08 16:28:15.892996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.945 [2024-10-08 16:28:16.055670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.203 [2024-10-08 16:28:16.286301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.203 [2024-10-08 16:28:16.483585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.203 [2024-10-08 16:28:16.483645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.770 [2024-10-08 16:28:16.966589] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.770 [2024-10-08 16:28:16.966646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.770 [2024-10-08 16:28:16.966662] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.770 [2024-10-08 16:28:16.966680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.770 16:28:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.770 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.770 "name": "Existed_Raid", 00:20:23.770 "uuid": "5607434d-d3f4-4ca0-8195-03d5a377cd46", 00:20:23.770 "strip_size_kb": 0, 00:20:23.770 "state": "configuring", 00:20:23.770 "raid_level": "raid1", 00:20:23.770 "superblock": true, 00:20:23.770 "num_base_bdevs": 2, 00:20:23.770 "num_base_bdevs_discovered": 0, 00:20:23.770 "num_base_bdevs_operational": 2, 00:20:23.770 "base_bdevs_list": [ 00:20:23.770 { 00:20:23.770 "name": "BaseBdev1", 00:20:23.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.770 "is_configured": false, 00:20:23.770 "data_offset": 0, 00:20:23.770 "data_size": 0 00:20:23.770 }, 00:20:23.770 { 00:20:23.770 "name": "BaseBdev2", 00:20:23.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.770 "is_configured": false, 00:20:23.770 "data_offset": 0, 00:20:23.770 "data_size": 0 00:20:23.770 } 00:20:23.770 ] 00:20:23.770 }' 00:20:23.770 16:28:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.770 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 [2024-10-08 16:28:17.474678] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:24.336 [2024-10-08 16:28:17.474726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 [2024-10-08 16:28:17.482664] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.336 [2024-10-08 16:28:17.482720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.336 [2024-10-08 16:28:17.482735] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.336 [2024-10-08 16:28:17.482754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.336 16:28:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:24.336 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.337 [2024-10-08 16:28:17.540315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.337 BaseBdev1 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.337 [ 00:20:24.337 { 00:20:24.337 "name": "BaseBdev1", 00:20:24.337 "aliases": [ 00:20:24.337 "69d4bb0b-6594-4255-8686-8531432946c8" 00:20:24.337 ], 00:20:24.337 "product_name": "Malloc disk", 00:20:24.337 "block_size": 4096, 00:20:24.337 "num_blocks": 8192, 00:20:24.337 "uuid": "69d4bb0b-6594-4255-8686-8531432946c8", 00:20:24.337 "md_size": 32, 00:20:24.337 "md_interleave": false, 00:20:24.337 "dif_type": 0, 00:20:24.337 "assigned_rate_limits": { 00:20:24.337 "rw_ios_per_sec": 0, 00:20:24.337 "rw_mbytes_per_sec": 0, 00:20:24.337 "r_mbytes_per_sec": 0, 00:20:24.337 "w_mbytes_per_sec": 0 00:20:24.337 }, 00:20:24.337 "claimed": true, 00:20:24.337 "claim_type": "exclusive_write", 00:20:24.337 "zoned": false, 00:20:24.337 "supported_io_types": { 00:20:24.337 "read": true, 00:20:24.337 "write": true, 00:20:24.337 "unmap": true, 00:20:24.337 "flush": true, 00:20:24.337 "reset": true, 00:20:24.337 "nvme_admin": false, 00:20:24.337 "nvme_io": false, 00:20:24.337 "nvme_io_md": false, 00:20:24.337 "write_zeroes": true, 00:20:24.337 "zcopy": true, 00:20:24.337 "get_zone_info": false, 00:20:24.337 "zone_management": false, 00:20:24.337 "zone_append": false, 00:20:24.337 "compare": false, 00:20:24.337 "compare_and_write": false, 00:20:24.337 "abort": true, 00:20:24.337 "seek_hole": false, 00:20:24.337 "seek_data": false, 00:20:24.337 "copy": true, 00:20:24.337 "nvme_iov_md": false 00:20:24.337 }, 00:20:24.337 "memory_domains": [ 00:20:24.337 { 00:20:24.337 "dma_device_id": "system", 00:20:24.337 "dma_device_type": 1 00:20:24.337 }, 
00:20:24.337 { 00:20:24.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.337 "dma_device_type": 2 00:20:24.337 } 00:20:24.337 ], 00:20:24.337 "driver_specific": {} 00:20:24.337 } 00:20:24.337 ] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.337 "name": "Existed_Raid", 00:20:24.337 "uuid": "0cbb134a-c1c3-4feb-a486-ac404467a461", 00:20:24.337 "strip_size_kb": 0, 00:20:24.337 "state": "configuring", 00:20:24.337 "raid_level": "raid1", 00:20:24.337 "superblock": true, 00:20:24.337 "num_base_bdevs": 2, 00:20:24.337 "num_base_bdevs_discovered": 1, 00:20:24.337 "num_base_bdevs_operational": 2, 00:20:24.337 "base_bdevs_list": [ 00:20:24.337 { 00:20:24.337 "name": "BaseBdev1", 00:20:24.337 "uuid": "69d4bb0b-6594-4255-8686-8531432946c8", 00:20:24.337 "is_configured": true, 00:20:24.337 "data_offset": 256, 00:20:24.337 "data_size": 7936 00:20:24.337 }, 00:20:24.337 { 00:20:24.337 "name": "BaseBdev2", 00:20:24.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.337 "is_configured": false, 00:20:24.337 "data_offset": 0, 00:20:24.337 "data_size": 0 00:20:24.337 } 00:20:24.337 ] 00:20:24.337 }' 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.337 16:28:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:24.908 [2024-10-08 16:28:18.084593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:24.908 [2024-10-08 16:28:18.084668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.908 [2024-10-08 16:28:18.096672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.908 [2024-10-08 16:28:18.099378] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.908 [2024-10-08 16:28:18.099434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.908 "name": "Existed_Raid", 00:20:24.908 "uuid": "70c95b6c-5311-4b60-95e7-66ae3a2970a2", 00:20:24.908 "strip_size_kb": 0, 00:20:24.908 "state": "configuring", 00:20:24.908 "raid_level": "raid1", 00:20:24.908 "superblock": true, 00:20:24.908 "num_base_bdevs": 2, 00:20:24.908 "num_base_bdevs_discovered": 1, 00:20:24.908 
"num_base_bdevs_operational": 2, 00:20:24.908 "base_bdevs_list": [ 00:20:24.908 { 00:20:24.908 "name": "BaseBdev1", 00:20:24.908 "uuid": "69d4bb0b-6594-4255-8686-8531432946c8", 00:20:24.908 "is_configured": true, 00:20:24.908 "data_offset": 256, 00:20:24.908 "data_size": 7936 00:20:24.908 }, 00:20:24.908 { 00:20:24.908 "name": "BaseBdev2", 00:20:24.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.908 "is_configured": false, 00:20:24.908 "data_offset": 0, 00:20:24.908 "data_size": 0 00:20:24.908 } 00:20:24.908 ] 00:20:24.908 }' 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.908 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.497 [2024-10-08 16:28:18.664199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.497 [2024-10-08 16:28:18.664734] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:25.497 [2024-10-08 16:28:18.664874] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:25.497 [2024-10-08 16:28:18.665022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:25.497 BaseBdev2 00:20:25.497 [2024-10-08 16:28:18.665290] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:25.497 [2024-10-08 16:28:18.665314] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:20:25.497 [2024-10-08 16:28:18.665433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:25.497 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.498 [ 00:20:25.498 { 00:20:25.498 "name": "BaseBdev2", 00:20:25.498 "aliases": [ 00:20:25.498 
"7382f0f3-717b-4234-8b40-da420eca4a10" 00:20:25.498 ], 00:20:25.498 "product_name": "Malloc disk", 00:20:25.498 "block_size": 4096, 00:20:25.498 "num_blocks": 8192, 00:20:25.498 "uuid": "7382f0f3-717b-4234-8b40-da420eca4a10", 00:20:25.498 "md_size": 32, 00:20:25.498 "md_interleave": false, 00:20:25.498 "dif_type": 0, 00:20:25.498 "assigned_rate_limits": { 00:20:25.498 "rw_ios_per_sec": 0, 00:20:25.498 "rw_mbytes_per_sec": 0, 00:20:25.498 "r_mbytes_per_sec": 0, 00:20:25.498 "w_mbytes_per_sec": 0 00:20:25.498 }, 00:20:25.498 "claimed": true, 00:20:25.498 "claim_type": "exclusive_write", 00:20:25.498 "zoned": false, 00:20:25.498 "supported_io_types": { 00:20:25.498 "read": true, 00:20:25.498 "write": true, 00:20:25.498 "unmap": true, 00:20:25.498 "flush": true, 00:20:25.498 "reset": true, 00:20:25.498 "nvme_admin": false, 00:20:25.498 "nvme_io": false, 00:20:25.498 "nvme_io_md": false, 00:20:25.498 "write_zeroes": true, 00:20:25.498 "zcopy": true, 00:20:25.498 "get_zone_info": false, 00:20:25.498 "zone_management": false, 00:20:25.498 "zone_append": false, 00:20:25.498 "compare": false, 00:20:25.498 "compare_and_write": false, 00:20:25.498 "abort": true, 00:20:25.498 "seek_hole": false, 00:20:25.498 "seek_data": false, 00:20:25.498 "copy": true, 00:20:25.498 "nvme_iov_md": false 00:20:25.498 }, 00:20:25.498 "memory_domains": [ 00:20:25.498 { 00:20:25.498 "dma_device_id": "system", 00:20:25.498 "dma_device_type": 1 00:20:25.498 }, 00:20:25.498 { 00:20:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.498 "dma_device_type": 2 00:20:25.498 } 00:20:25.498 ], 00:20:25.498 "driver_specific": {} 00:20:25.498 } 00:20:25.498 ] 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.498 16:28:18 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.498 "name": "Existed_Raid", 00:20:25.498 "uuid": "70c95b6c-5311-4b60-95e7-66ae3a2970a2", 00:20:25.498 "strip_size_kb": 0, 00:20:25.498 "state": "online", 00:20:25.498 "raid_level": "raid1", 00:20:25.498 "superblock": true, 00:20:25.498 "num_base_bdevs": 2, 00:20:25.498 "num_base_bdevs_discovered": 2, 00:20:25.498 "num_base_bdevs_operational": 2, 00:20:25.498 "base_bdevs_list": [ 00:20:25.498 { 00:20:25.498 "name": "BaseBdev1", 00:20:25.498 "uuid": "69d4bb0b-6594-4255-8686-8531432946c8", 00:20:25.498 "is_configured": true, 00:20:25.498 "data_offset": 256, 00:20:25.498 "data_size": 7936 00:20:25.498 }, 00:20:25.498 { 00:20:25.498 "name": "BaseBdev2", 00:20:25.498 "uuid": "7382f0f3-717b-4234-8b40-da420eca4a10", 00:20:25.498 "is_configured": true, 00:20:25.498 "data_offset": 256, 00:20:25.498 "data_size": 7936 00:20:25.498 } 00:20:25.498 ] 00:20:25.498 }' 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.498 16:28:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:26.065 16:28:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:26.065 [2024-10-08 16:28:19.240917] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:26.065 "name": "Existed_Raid", 00:20:26.065 "aliases": [ 00:20:26.065 "70c95b6c-5311-4b60-95e7-66ae3a2970a2" 00:20:26.065 ], 00:20:26.065 "product_name": "Raid Volume", 00:20:26.065 "block_size": 4096, 00:20:26.065 "num_blocks": 7936, 00:20:26.065 "uuid": "70c95b6c-5311-4b60-95e7-66ae3a2970a2", 00:20:26.065 "md_size": 32, 00:20:26.065 "md_interleave": false, 00:20:26.065 "dif_type": 0, 00:20:26.065 "assigned_rate_limits": { 00:20:26.065 "rw_ios_per_sec": 0, 00:20:26.065 "rw_mbytes_per_sec": 0, 00:20:26.065 "r_mbytes_per_sec": 0, 00:20:26.065 "w_mbytes_per_sec": 0 00:20:26.065 }, 00:20:26.065 "claimed": false, 00:20:26.065 "zoned": false, 00:20:26.065 "supported_io_types": { 00:20:26.065 "read": true, 00:20:26.065 "write": true, 00:20:26.065 "unmap": false, 00:20:26.065 "flush": false, 00:20:26.065 "reset": true, 00:20:26.065 "nvme_admin": false, 00:20:26.065 "nvme_io": false, 00:20:26.065 "nvme_io_md": false, 00:20:26.065 "write_zeroes": true, 00:20:26.065 "zcopy": false, 00:20:26.065 "get_zone_info": 
false, 00:20:26.065 "zone_management": false, 00:20:26.065 "zone_append": false, 00:20:26.065 "compare": false, 00:20:26.065 "compare_and_write": false, 00:20:26.065 "abort": false, 00:20:26.065 "seek_hole": false, 00:20:26.065 "seek_data": false, 00:20:26.065 "copy": false, 00:20:26.065 "nvme_iov_md": false 00:20:26.065 }, 00:20:26.065 "memory_domains": [ 00:20:26.065 { 00:20:26.065 "dma_device_id": "system", 00:20:26.065 "dma_device_type": 1 00:20:26.065 }, 00:20:26.065 { 00:20:26.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.065 "dma_device_type": 2 00:20:26.065 }, 00:20:26.065 { 00:20:26.065 "dma_device_id": "system", 00:20:26.065 "dma_device_type": 1 00:20:26.065 }, 00:20:26.065 { 00:20:26.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.065 "dma_device_type": 2 00:20:26.065 } 00:20:26.065 ], 00:20:26.065 "driver_specific": { 00:20:26.065 "raid": { 00:20:26.065 "uuid": "70c95b6c-5311-4b60-95e7-66ae3a2970a2", 00:20:26.065 "strip_size_kb": 0, 00:20:26.065 "state": "online", 00:20:26.065 "raid_level": "raid1", 00:20:26.065 "superblock": true, 00:20:26.065 "num_base_bdevs": 2, 00:20:26.065 "num_base_bdevs_discovered": 2, 00:20:26.065 "num_base_bdevs_operational": 2, 00:20:26.065 "base_bdevs_list": [ 00:20:26.065 { 00:20:26.065 "name": "BaseBdev1", 00:20:26.065 "uuid": "69d4bb0b-6594-4255-8686-8531432946c8", 00:20:26.065 "is_configured": true, 00:20:26.065 "data_offset": 256, 00:20:26.065 "data_size": 7936 00:20:26.065 }, 00:20:26.065 { 00:20:26.065 "name": "BaseBdev2", 00:20:26.065 "uuid": "7382f0f3-717b-4234-8b40-da420eca4a10", 00:20:26.065 "is_configured": true, 00:20:26.065 "data_offset": 256, 00:20:26.065 "data_size": 7936 00:20:26.065 } 00:20:26.065 ] 00:20:26.065 } 00:20:26.065 } 00:20:26.065 }' 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.065 16:28:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:26.065 BaseBdev2' 00:20:26.065 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.323 16:28:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.323 [2024-10-08 16:28:19.504617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:26.323 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.324 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.582 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.582 "name": "Existed_Raid", 
00:20:26.582 "uuid": "70c95b6c-5311-4b60-95e7-66ae3a2970a2", 00:20:26.582 "strip_size_kb": 0, 00:20:26.582 "state": "online", 00:20:26.582 "raid_level": "raid1", 00:20:26.582 "superblock": true, 00:20:26.582 "num_base_bdevs": 2, 00:20:26.582 "num_base_bdevs_discovered": 1, 00:20:26.582 "num_base_bdevs_operational": 1, 00:20:26.582 "base_bdevs_list": [ 00:20:26.582 { 00:20:26.582 "name": null, 00:20:26.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.582 "is_configured": false, 00:20:26.582 "data_offset": 0, 00:20:26.582 "data_size": 7936 00:20:26.582 }, 00:20:26.582 { 00:20:26.582 "name": "BaseBdev2", 00:20:26.582 "uuid": "7382f0f3-717b-4234-8b40-da420eca4a10", 00:20:26.582 "is_configured": true, 00:20:26.582 "data_offset": 256, 00:20:26.582 "data_size": 7936 00:20:26.582 } 00:20:26.582 ] 00:20:26.582 }' 00:20:26.582 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.582 16:28:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.841 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.841 [2024-10-08 16:28:20.161548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:26.841 [2024-10-08 16:28:20.161673] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:27.099 [2024-10-08 16:28:20.250572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.099 [2024-10-08 16:28:20.250642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.099 [2024-10-08 16:28:20.250662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.099 16:28:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88015 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88015 ']' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88015 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88015 00:20:27.099 killing process with pid 88015 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88015' 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88015 00:20:27.099 [2024-10-08 16:28:20.347541] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.099 16:28:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88015 00:20:27.099 [2024-10-08 16:28:20.361755] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.487 16:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:28.487 00:20:28.487 real 0m5.738s 00:20:28.487 user 0m8.571s 00:20:28.487 sys 0m0.853s 00:20:28.487 16:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.487 16:28:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.487 ************************************ 00:20:28.487 END TEST raid_state_function_test_sb_md_separate 00:20:28.487 ************************************ 00:20:28.487 16:28:21 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:28.487 16:28:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:28.487 16:28:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.487 16:28:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.487 ************************************ 00:20:28.487 START TEST raid_superblock_test_md_separate 00:20:28.487 ************************************ 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:28.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88263 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88263 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88263 ']' 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.487 16:28:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.487 [2024-10-08 16:28:21.704634] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:20:28.487 [2024-10-08 16:28:21.705139] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88263 ] 00:20:28.745 [2024-10-08 16:28:21.878915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.003 [2024-10-08 16:28:22.171449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.262 [2024-10-08 16:28:22.414188] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.262 [2024-10-08 16:28:22.414252] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.521 16:28:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.521 malloc1 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.521 [2024-10-08 16:28:22.686815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:29.521 [2024-10-08 16:28:22.686921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.521 [2024-10-08 16:28:22.686963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:29.521 [2024-10-08 16:28:22.686979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.521 [2024-10-08 16:28:22.690357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.521 [2024-10-08 16:28:22.690409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:29.521 pt1 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:29.521 
16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.521 malloc2 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.521 [2024-10-08 16:28:22.770141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:29.521 [2024-10-08 16:28:22.770253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.521 [2024-10-08 16:28:22.770292] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:20:29.521 [2024-10-08 16:28:22.770308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.521 [2024-10-08 16:28:22.773534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.521 [2024-10-08 16:28:22.773798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:29.521 pt2 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.521 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.521 [2024-10-08 16:28:22.782284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:29.521 [2024-10-08 16:28:22.785285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.521 [2024-10-08 16:28:22.785746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:29.521 [2024-10-08 16:28:22.785775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:29.522 [2024-10-08 16:28:22.785948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:29.522 [2024-10-08 16:28:22.786142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:29.522 [2024-10-08 16:28:22.786161] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:29.522 [2024-10-08 16:28:22.786359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.522 "name": "raid_bdev1", 00:20:29.522 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:29.522 "strip_size_kb": 0, 00:20:29.522 "state": "online", 00:20:29.522 "raid_level": "raid1", 00:20:29.522 "superblock": true, 00:20:29.522 "num_base_bdevs": 2, 00:20:29.522 "num_base_bdevs_discovered": 2, 00:20:29.522 "num_base_bdevs_operational": 2, 00:20:29.522 "base_bdevs_list": [ 00:20:29.522 { 00:20:29.522 "name": "pt1", 00:20:29.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.522 "is_configured": true, 00:20:29.522 "data_offset": 256, 00:20:29.522 "data_size": 7936 00:20:29.522 }, 00:20:29.522 { 00:20:29.522 "name": "pt2", 00:20:29.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.522 "is_configured": true, 00:20:29.522 "data_offset": 256, 00:20:29.522 "data_size": 7936 00:20:29.522 } 00:20:29.522 ] 00:20:29.522 }' 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.522 16:28:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:30.088 [2024-10-08 16:28:23.318978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:30.088 "name": "raid_bdev1", 00:20:30.088 "aliases": [ 00:20:30.088 "8b82ed53-e4a4-4c6d-8830-69bdd917733c" 00:20:30.088 ], 00:20:30.088 "product_name": "Raid Volume", 00:20:30.088 "block_size": 4096, 00:20:30.088 "num_blocks": 7936, 00:20:30.088 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:30.088 "md_size": 32, 00:20:30.088 "md_interleave": false, 00:20:30.088 "dif_type": 0, 00:20:30.088 "assigned_rate_limits": { 00:20:30.088 "rw_ios_per_sec": 0, 00:20:30.088 "rw_mbytes_per_sec": 0, 00:20:30.088 "r_mbytes_per_sec": 0, 00:20:30.088 "w_mbytes_per_sec": 0 00:20:30.088 }, 00:20:30.088 "claimed": false, 00:20:30.088 "zoned": false, 00:20:30.088 "supported_io_types": { 00:20:30.088 "read": true, 00:20:30.088 "write": true, 00:20:30.088 "unmap": false, 00:20:30.088 "flush": false, 00:20:30.088 "reset": true, 00:20:30.088 "nvme_admin": false, 00:20:30.088 "nvme_io": false, 00:20:30.088 "nvme_io_md": false, 00:20:30.088 "write_zeroes": true, 00:20:30.088 "zcopy": false, 00:20:30.088 "get_zone_info": false, 00:20:30.088 "zone_management": false, 00:20:30.088 "zone_append": false, 00:20:30.088 "compare": 
false, 00:20:30.088 "compare_and_write": false, 00:20:30.088 "abort": false, 00:20:30.088 "seek_hole": false, 00:20:30.088 "seek_data": false, 00:20:30.088 "copy": false, 00:20:30.088 "nvme_iov_md": false 00:20:30.088 }, 00:20:30.088 "memory_domains": [ 00:20:30.088 { 00:20:30.088 "dma_device_id": "system", 00:20:30.088 "dma_device_type": 1 00:20:30.088 }, 00:20:30.088 { 00:20:30.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.088 "dma_device_type": 2 00:20:30.088 }, 00:20:30.088 { 00:20:30.088 "dma_device_id": "system", 00:20:30.088 "dma_device_type": 1 00:20:30.088 }, 00:20:30.088 { 00:20:30.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.088 "dma_device_type": 2 00:20:30.088 } 00:20:30.088 ], 00:20:30.088 "driver_specific": { 00:20:30.088 "raid": { 00:20:30.088 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:30.088 "strip_size_kb": 0, 00:20:30.088 "state": "online", 00:20:30.088 "raid_level": "raid1", 00:20:30.088 "superblock": true, 00:20:30.088 "num_base_bdevs": 2, 00:20:30.088 "num_base_bdevs_discovered": 2, 00:20:30.088 "num_base_bdevs_operational": 2, 00:20:30.088 "base_bdevs_list": [ 00:20:30.088 { 00:20:30.088 "name": "pt1", 00:20:30.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.088 "is_configured": true, 00:20:30.088 "data_offset": 256, 00:20:30.088 "data_size": 7936 00:20:30.088 }, 00:20:30.088 { 00:20:30.088 "name": "pt2", 00:20:30.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.088 "is_configured": true, 00:20:30.088 "data_offset": 256, 00:20:30.088 "data_size": 7936 00:20:30.088 } 00:20:30.088 ] 00:20:30.088 } 00:20:30.088 } 00:20:30.088 }' 00:20:30.088 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:30.346 pt2' 00:20:30.346 16:28:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.346 16:28:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:30.346 [2024-10-08 16:28:23.598999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8b82ed53-e4a4-4c6d-8830-69bdd917733c 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8b82ed53-e4a4-4c6d-8830-69bdd917733c ']' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 [2024-10-08 16:28:23.646649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.346 [2024-10-08 16:28:23.646683] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.346 
[2024-10-08 16:28:23.646800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.346 [2024-10-08 16:28:23.646897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.346 [2024-10-08 16:28:23.646918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 [2024-10-08 16:28:23.770750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:30.605 [2024-10-08 16:28:23.773469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:30.605 [2024-10-08 16:28:23.773734] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:30.605 [2024-10-08 16:28:23.773984] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:30.605 [2024-10-08 16:28:23.774140] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.605 [2024-10-08 16:28:23.774282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:30.605 request: 00:20:30.605 { 00:20:30.605 "name": "raid_bdev1", 00:20:30.605 "raid_level": "raid1", 00:20:30.605 "base_bdevs": [ 00:20:30.605 "malloc1", 00:20:30.605 "malloc2" 00:20:30.605 ], 00:20:30.605 "superblock": false, 00:20:30.605 "method": "bdev_raid_create", 00:20:30.605 "req_id": 1 00:20:30.605 } 00:20:30.605 Got JSON-RPC error response 00:20:30.605 response: 00:20:30.605 { 00:20:30.605 "code": -17, 00:20:30.605 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:20:30.605 } 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.605 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 [2024-10-08 16:28:23.842706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:30.605 [2024-10-08 16:28:23.842810] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.605 [2024-10-08 16:28:23.842840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:30.606 [2024-10-08 16:28:23.842860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.606 [2024-10-08 16:28:23.845810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.606 [2024-10-08 16:28:23.845870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:30.606 [2024-10-08 16:28:23.845956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:30.606 [2024-10-08 16:28:23.846040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:30.606 pt1 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.606 "name": "raid_bdev1", 00:20:30.606 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:30.606 "strip_size_kb": 0, 00:20:30.606 "state": "configuring", 00:20:30.606 "raid_level": "raid1", 00:20:30.606 "superblock": true, 00:20:30.606 "num_base_bdevs": 2, 00:20:30.606 "num_base_bdevs_discovered": 1, 00:20:30.606 "num_base_bdevs_operational": 2, 00:20:30.606 "base_bdevs_list": [ 00:20:30.606 { 00:20:30.606 "name": "pt1", 00:20:30.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.606 "is_configured": true, 00:20:30.606 "data_offset": 256, 00:20:30.606 "data_size": 7936 00:20:30.606 }, 00:20:30.606 { 00:20:30.606 "name": null, 00:20:30.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.606 "is_configured": false, 00:20:30.606 "data_offset": 256, 00:20:30.606 "data_size": 7936 00:20:30.606 } 00:20:30.606 ] 00:20:30.606 }' 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.606 16:28:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.172 16:28:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.172 [2024-10-08 16:28:24.378880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:31.172 [2024-10-08 16:28:24.379001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.172 [2024-10-08 16:28:24.379034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:31.172 [2024-10-08 16:28:24.379068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.172 [2024-10-08 16:28:24.379415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.172 [2024-10-08 16:28:24.379446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:31.172 [2024-10-08 16:28:24.379580] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:31.172 [2024-10-08 16:28:24.379621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.172 [2024-10-08 16:28:24.379782] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:31.172 [2024-10-08 16:28:24.379804] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:31.172 [2024-10-08 16:28:24.379911] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:31.172 [2024-10-08 16:28:24.380059] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:31.172 [2024-10-08 16:28:24.380090] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:31.172 [2024-10-08 16:28:24.380220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.172 pt2 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.172 16:28:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.172 "name": "raid_bdev1", 00:20:31.172 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:31.172 "strip_size_kb": 0, 00:20:31.172 "state": "online", 00:20:31.172 "raid_level": "raid1", 00:20:31.172 "superblock": true, 00:20:31.172 "num_base_bdevs": 2, 00:20:31.172 "num_base_bdevs_discovered": 2, 00:20:31.172 "num_base_bdevs_operational": 2, 00:20:31.172 "base_bdevs_list": [ 00:20:31.172 { 00:20:31.172 "name": "pt1", 00:20:31.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.172 "is_configured": true, 00:20:31.172 "data_offset": 256, 00:20:31.172 "data_size": 7936 00:20:31.172 }, 00:20:31.172 { 00:20:31.172 "name": "pt2", 00:20:31.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.172 "is_configured": true, 00:20:31.172 "data_offset": 256, 00:20:31.172 "data_size": 7936 00:20:31.172 } 00:20:31.172 ] 00:20:31.172 }' 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.172 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.752 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.753 [2024-10-08 16:28:24.895451] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.753 "name": "raid_bdev1", 00:20:31.753 "aliases": [ 00:20:31.753 "8b82ed53-e4a4-4c6d-8830-69bdd917733c" 00:20:31.753 ], 00:20:31.753 "product_name": "Raid Volume", 00:20:31.753 "block_size": 4096, 00:20:31.753 "num_blocks": 7936, 00:20:31.753 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:31.753 "md_size": 32, 00:20:31.753 "md_interleave": false, 00:20:31.753 "dif_type": 0, 00:20:31.753 "assigned_rate_limits": { 00:20:31.753 "rw_ios_per_sec": 0, 00:20:31.753 "rw_mbytes_per_sec": 0, 00:20:31.753 "r_mbytes_per_sec": 0, 00:20:31.753 
"w_mbytes_per_sec": 0 00:20:31.753 }, 00:20:31.753 "claimed": false, 00:20:31.753 "zoned": false, 00:20:31.753 "supported_io_types": { 00:20:31.753 "read": true, 00:20:31.753 "write": true, 00:20:31.753 "unmap": false, 00:20:31.753 "flush": false, 00:20:31.753 "reset": true, 00:20:31.753 "nvme_admin": false, 00:20:31.753 "nvme_io": false, 00:20:31.753 "nvme_io_md": false, 00:20:31.753 "write_zeroes": true, 00:20:31.753 "zcopy": false, 00:20:31.753 "get_zone_info": false, 00:20:31.753 "zone_management": false, 00:20:31.753 "zone_append": false, 00:20:31.753 "compare": false, 00:20:31.753 "compare_and_write": false, 00:20:31.753 "abort": false, 00:20:31.753 "seek_hole": false, 00:20:31.753 "seek_data": false, 00:20:31.753 "copy": false, 00:20:31.753 "nvme_iov_md": false 00:20:31.753 }, 00:20:31.753 "memory_domains": [ 00:20:31.753 { 00:20:31.753 "dma_device_id": "system", 00:20:31.753 "dma_device_type": 1 00:20:31.753 }, 00:20:31.753 { 00:20:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.753 "dma_device_type": 2 00:20:31.753 }, 00:20:31.753 { 00:20:31.753 "dma_device_id": "system", 00:20:31.753 "dma_device_type": 1 00:20:31.753 }, 00:20:31.753 { 00:20:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.753 "dma_device_type": 2 00:20:31.753 } 00:20:31.753 ], 00:20:31.753 "driver_specific": { 00:20:31.753 "raid": { 00:20:31.753 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:31.753 "strip_size_kb": 0, 00:20:31.753 "state": "online", 00:20:31.753 "raid_level": "raid1", 00:20:31.753 "superblock": true, 00:20:31.753 "num_base_bdevs": 2, 00:20:31.753 "num_base_bdevs_discovered": 2, 00:20:31.753 "num_base_bdevs_operational": 2, 00:20:31.753 "base_bdevs_list": [ 00:20:31.753 { 00:20:31.753 "name": "pt1", 00:20:31.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.753 "is_configured": true, 00:20:31.753 "data_offset": 256, 00:20:31.753 "data_size": 7936 00:20:31.753 }, 00:20:31.753 { 00:20:31.753 "name": "pt2", 00:20:31.753 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:31.753 "is_configured": true, 00:20:31.753 "data_offset": 256, 00:20:31.753 "data_size": 7936 00:20:31.753 } 00:20:31.753 ] 00:20:31.753 } 00:20:31.753 } 00:20:31.753 }' 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:31.753 pt2' 00:20:31.753 16:28:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.753 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:32.011 [2024-10-08 16:28:25.183462] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8b82ed53-e4a4-4c6d-8830-69bdd917733c '!=' 8b82ed53-e4a4-4c6d-8830-69bdd917733c ']' 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 [2024-10-08 16:28:25.235212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.011 16:28:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.011 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.011 "name": "raid_bdev1", 00:20:32.011 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:32.011 "strip_size_kb": 0, 00:20:32.011 "state": "online", 00:20:32.011 "raid_level": "raid1", 00:20:32.011 "superblock": true, 00:20:32.011 "num_base_bdevs": 2, 00:20:32.011 "num_base_bdevs_discovered": 1, 00:20:32.012 "num_base_bdevs_operational": 1, 00:20:32.012 "base_bdevs_list": [ 00:20:32.012 { 00:20:32.012 "name": null, 00:20:32.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.012 "is_configured": false, 00:20:32.012 "data_offset": 0, 00:20:32.012 "data_size": 7936 00:20:32.012 }, 00:20:32.012 { 00:20:32.012 "name": "pt2", 00:20:32.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.012 "is_configured": true, 00:20:32.012 "data_offset": 256, 00:20:32.012 "data_size": 7936 00:20:32.012 } 00:20:32.012 ] 00:20:32.012 }' 00:20:32.012 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.012 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 [2024-10-08 16:28:25.755332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:32.579 [2024-10-08 16:28:25.755381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.579 [2024-10-08 16:28:25.755496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.579 [2024-10-08 16:28:25.755640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.579 [2024-10-08 16:28:25.755665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:32.579 16:28:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 [2024-10-08 16:28:25.831312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:32.579 [2024-10-08 16:28:25.831412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.579 [2024-10-08 16:28:25.831439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:32.579 [2024-10-08 16:28:25.831457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.579 [2024-10-08 16:28:25.834629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:32.579 [2024-10-08 16:28:25.834681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:32.579 [2024-10-08 16:28:25.834762] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:32.579 [2024-10-08 16:28:25.834836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:32.579 [2024-10-08 16:28:25.834989] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:32.579 [2024-10-08 16:28:25.835012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:32.579 [2024-10-08 16:28:25.835113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:32.579 [2024-10-08 16:28:25.835266] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:32.579 [2024-10-08 16:28:25.835281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:32.579 [2024-10-08 16:28:25.835464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.579 pt2 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.579 "name": "raid_bdev1", 00:20:32.579 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:32.579 "strip_size_kb": 0, 00:20:32.579 "state": "online", 00:20:32.579 "raid_level": "raid1", 00:20:32.579 "superblock": true, 00:20:32.579 "num_base_bdevs": 2, 00:20:32.579 "num_base_bdevs_discovered": 1, 00:20:32.579 "num_base_bdevs_operational": 1, 00:20:32.579 "base_bdevs_list": [ 00:20:32.579 { 00:20:32.579 "name": null, 00:20:32.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.579 "is_configured": false, 00:20:32.579 "data_offset": 256, 00:20:32.579 "data_size": 7936 00:20:32.579 }, 00:20:32.579 { 00:20:32.579 "name": "pt2", 00:20:32.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.579 "is_configured": true, 
00:20:32.579 "data_offset": 256, 00:20:32.579 "data_size": 7936 00:20:32.579 } 00:20:32.579 ] 00:20:32.579 }' 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.579 16:28:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.146 [2024-10-08 16:28:26.367649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.146 [2024-10-08 16:28:26.367693] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.146 [2024-10-08 16:28:26.367813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.146 [2024-10-08 16:28:26.367898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.146 [2024-10-08 16:28:26.367915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.146 16:28:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.146 [2024-10-08 16:28:26.431710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:33.146 [2024-10-08 16:28:26.431791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.146 [2024-10-08 16:28:26.431825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:33.146 [2024-10-08 16:28:26.431840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.146 [2024-10-08 16:28:26.434889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.146 [2024-10-08 16:28:26.434950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:33.146 [2024-10-08 16:28:26.435051] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:33.146 [2024-10-08 16:28:26.435129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:33.146 [2024-10-08 16:28:26.435333] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:33.146 
[2024-10-08 16:28:26.435353] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.146 [2024-10-08 16:28:26.435382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:33.146 [2024-10-08 16:28:26.435459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:33.146 [2024-10-08 16:28:26.435579] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:33.146 [2024-10-08 16:28:26.435596] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:33.146 [2024-10-08 16:28:26.435701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:33.146 [2024-10-08 16:28:26.435842] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:33.146 [2024-10-08 16:28:26.435869] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:33.146 [2024-10-08 16:28:26.436050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.146 pt1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.146 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.147 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.147 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.147 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.147 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.147 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.405 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.405 "name": "raid_bdev1", 00:20:33.405 "uuid": "8b82ed53-e4a4-4c6d-8830-69bdd917733c", 00:20:33.405 "strip_size_kb": 0, 00:20:33.405 "state": "online", 00:20:33.405 "raid_level": "raid1", 00:20:33.405 "superblock": true, 00:20:33.405 "num_base_bdevs": 2, 00:20:33.405 "num_base_bdevs_discovered": 1, 00:20:33.405 "num_base_bdevs_operational": 1, 00:20:33.405 "base_bdevs_list": [ 00:20:33.405 { 00:20:33.405 "name": null, 00:20:33.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.405 "is_configured": false, 00:20:33.405 "data_offset": 256, 00:20:33.405 "data_size": 7936 00:20:33.405 }, 00:20:33.405 { 00:20:33.405 
"name": "pt2", 00:20:33.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.405 "is_configured": true, 00:20:33.405 "data_offset": 256, 00:20:33.405 "data_size": 7936 00:20:33.405 } 00:20:33.405 ] 00:20:33.405 }' 00:20:33.405 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.405 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.663 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:33.663 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.663 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.663 16:28:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:33.921 16:28:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.921 [2024-10-08 16:28:27.040224] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
8b82ed53-e4a4-4c6d-8830-69bdd917733c '!=' 8b82ed53-e4a4-4c6d-8830-69bdd917733c ']' 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88263 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88263 ']' 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 88263 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88263 00:20:33.921 killing process with pid 88263 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.921 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88263' 00:20:33.922 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 88263 00:20:33.922 [2024-10-08 16:28:27.118750] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.922 16:28:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 88263 00:20:33.922 [2024-10-08 16:28:27.118877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.922 [2024-10-08 16:28:27.118963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.922 [2024-10-08 16:28:27.118987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:20:34.180 [2024-10-08 16:28:27.341889] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.555 16:28:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:35.555 00:20:35.555 real 0m7.027s 00:20:35.555 user 0m10.779s 00:20:35.555 sys 0m1.143s 00:20:35.555 16:28:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.555 16:28:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.555 ************************************ 00:20:35.555 END TEST raid_superblock_test_md_separate 00:20:35.555 ************************************ 00:20:35.555 16:28:28 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:35.555 16:28:28 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:35.555 16:28:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:35.555 16:28:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:35.555 16:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.555 ************************************ 00:20:35.555 START TEST raid_rebuild_test_sb_md_separate 00:20:35.555 ************************************ 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:35.555 16:28:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88597 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88597 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88597 ']' 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.555 16:28:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.555 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:35.555 Zero copy mechanism will not be used. 00:20:35.555 [2024-10-08 16:28:28.806740] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:20:35.555 [2024-10-08 16:28:28.806956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88597 ] 00:20:35.813 [2024-10-08 16:28:28.991159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.073 [2024-10-08 16:28:29.300136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.331 [2024-10-08 16:28:29.526461] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.331 [2024-10-08 16:28:29.526511] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.589 BaseBdev1_malloc 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.589 [2024-10-08 16:28:29.841672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:36.589 [2024-10-08 16:28:29.841767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.589 [2024-10-08 16:28:29.841806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:36.589 [2024-10-08 16:28:29.841826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.589 [2024-10-08 16:28:29.845136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.589 [2024-10-08 16:28:29.845207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:36.589 BaseBdev1 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.589 BaseBdev2_malloc 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.589 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 [2024-10-08 16:28:29.915205] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:36.848 [2024-10-08 16:28:29.915313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.848 [2024-10-08 16:28:29.915346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:36.848 [2024-10-08 16:28:29.915366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.848 [2024-10-08 16:28:29.918206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.848 [2024-10-08 16:28:29.918256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:36.848 BaseBdev2 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 spare_malloc 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 spare_delay 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 [2024-10-08 16:28:29.980531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:36.848 [2024-10-08 16:28:29.980618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.848 [2024-10-08 16:28:29.980653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:36.848 [2024-10-08 16:28:29.980674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.848 [2024-10-08 16:28:29.983700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.848 [2024-10-08 16:28:29.983764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:36.848 spare 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 [2024-10-08 16:28:29.992634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.848 [2024-10-08 16:28:29.995230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.848 [2024-10-08 16:28:29.995638] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:36.848 [2024-10-08 16:28:29.995670] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:36.848 [2024-10-08 16:28:29.995770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:36.848 [2024-10-08 16:28:29.995951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:36.848 [2024-10-08 16:28:29.995967] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:36.848 [2024-10-08 16:28:29.996108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.848 16:28:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.848 16:28:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.848 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.848 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.848 "name": "raid_bdev1", 00:20:36.848 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:36.848 "strip_size_kb": 0, 00:20:36.848 "state": "online", 00:20:36.848 "raid_level": "raid1", 00:20:36.848 "superblock": true, 00:20:36.848 "num_base_bdevs": 2, 00:20:36.848 "num_base_bdevs_discovered": 2, 00:20:36.848 "num_base_bdevs_operational": 2, 00:20:36.848 "base_bdevs_list": [ 00:20:36.848 { 00:20:36.848 "name": "BaseBdev1", 00:20:36.848 "uuid": "ee603f37-3f71-514e-bd23-dcde47dac280", 00:20:36.849 "is_configured": true, 00:20:36.849 "data_offset": 256, 00:20:36.849 "data_size": 7936 00:20:36.849 }, 00:20:36.849 { 00:20:36.849 "name": "BaseBdev2", 00:20:36.849 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:36.849 "is_configured": true, 00:20:36.849 "data_offset": 256, 00:20:36.849 "data_size": 7936 00:20:36.849 } 00:20:36.849 ] 00:20:36.849 }' 00:20:36.849 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.849 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:37.416 [2024-10-08 16:28:30.509186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:37.416 16:28:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.416 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:37.675 [2024-10-08 16:28:30.905023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.675 /dev/nbd0 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.675 1+0 records in 00:20:37.675 1+0 records out 00:20:37.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000888385 s, 4.6 MB/s 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:37.675 16:28:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:38.611 7936+0 records in 00:20:38.611 7936+0 records out 00:20:38.611 32505856 bytes (33 MB, 31 MiB) copied, 0.92665 s, 35.1 
MB/s 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.611 16:28:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:39.179 [2024-10-08 16:28:32.206294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.179 [2024-10-08 16:28:32.222428] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.179 "name": "raid_bdev1", 00:20:39.179 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:39.179 "strip_size_kb": 0, 00:20:39.179 "state": "online", 00:20:39.179 "raid_level": "raid1", 00:20:39.179 "superblock": true, 00:20:39.179 "num_base_bdevs": 2, 00:20:39.179 "num_base_bdevs_discovered": 1, 00:20:39.179 "num_base_bdevs_operational": 1, 00:20:39.179 "base_bdevs_list": [ 00:20:39.179 { 00:20:39.179 "name": null, 00:20:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.179 "is_configured": false, 00:20:39.179 "data_offset": 0, 00:20:39.179 "data_size": 7936 00:20:39.179 }, 00:20:39.179 { 00:20:39.179 "name": "BaseBdev2", 00:20:39.179 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:39.179 "is_configured": true, 00:20:39.179 "data_offset": 256, 00:20:39.179 "data_size": 7936 00:20:39.179 } 00:20:39.179 ] 00:20:39.179 }' 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.179 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.438 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:39.438 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.438 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.438 [2024-10-08 16:28:32.718655] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.438 [2024-10-08 16:28:32.733259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:39.438 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.438 16:28:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:39.438 [2024-10-08 16:28:32.736063] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.825 "name": "raid_bdev1", 00:20:40.825 "uuid": 
"e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:40.825 "strip_size_kb": 0, 00:20:40.825 "state": "online", 00:20:40.825 "raid_level": "raid1", 00:20:40.825 "superblock": true, 00:20:40.825 "num_base_bdevs": 2, 00:20:40.825 "num_base_bdevs_discovered": 2, 00:20:40.825 "num_base_bdevs_operational": 2, 00:20:40.825 "process": { 00:20:40.825 "type": "rebuild", 00:20:40.825 "target": "spare", 00:20:40.825 "progress": { 00:20:40.825 "blocks": 2560, 00:20:40.825 "percent": 32 00:20:40.825 } 00:20:40.825 }, 00:20:40.825 "base_bdevs_list": [ 00:20:40.825 { 00:20:40.825 "name": "spare", 00:20:40.825 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:40.825 "is_configured": true, 00:20:40.825 "data_offset": 256, 00:20:40.825 "data_size": 7936 00:20:40.825 }, 00:20:40.825 { 00:20:40.825 "name": "BaseBdev2", 00:20:40.825 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:40.825 "is_configured": true, 00:20:40.825 "data_offset": 256, 00:20:40.825 "data_size": 7936 00:20:40.825 } 00:20:40.825 ] 00:20:40.825 }' 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.825 [2024-10-08 16:28:33.906357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.825 
[2024-10-08 16:28:33.948100] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:40.825 [2024-10-08 16:28:33.948193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.825 [2024-10-08 16:28:33.948219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.825 [2024-10-08 16:28:33.948241] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.825 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.826 16:28:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.826 16:28:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.826 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.826 "name": "raid_bdev1", 00:20:40.826 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:40.826 "strip_size_kb": 0, 00:20:40.826 "state": "online", 00:20:40.826 "raid_level": "raid1", 00:20:40.826 "superblock": true, 00:20:40.826 "num_base_bdevs": 2, 00:20:40.826 "num_base_bdevs_discovered": 1, 00:20:40.826 "num_base_bdevs_operational": 1, 00:20:40.826 "base_bdevs_list": [ 00:20:40.826 { 00:20:40.826 "name": null, 00:20:40.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.826 "is_configured": false, 00:20:40.826 "data_offset": 0, 00:20:40.826 "data_size": 7936 00:20:40.826 }, 00:20:40.826 { 00:20:40.826 "name": "BaseBdev2", 00:20:40.826 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:40.826 "is_configured": true, 00:20:40.826 "data_offset": 256, 00:20:40.826 "data_size": 7936 00:20:40.826 } 00:20:40.826 ] 00:20:40.826 }' 00:20:40.826 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.826 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.392 "name": "raid_bdev1", 00:20:41.392 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:41.392 "strip_size_kb": 0, 00:20:41.392 "state": "online", 00:20:41.392 "raid_level": "raid1", 00:20:41.392 "superblock": true, 00:20:41.392 "num_base_bdevs": 2, 00:20:41.392 "num_base_bdevs_discovered": 1, 00:20:41.392 "num_base_bdevs_operational": 1, 00:20:41.392 "base_bdevs_list": [ 00:20:41.392 { 00:20:41.392 "name": null, 00:20:41.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.392 "is_configured": false, 00:20:41.392 "data_offset": 0, 00:20:41.392 "data_size": 7936 00:20:41.392 }, 00:20:41.392 { 00:20:41.392 "name": "BaseBdev2", 00:20:41.392 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:41.392 "is_configured": true, 00:20:41.392 "data_offset": 256, 00:20:41.392 "data_size": 7936 00:20:41.392 } 00:20:41.392 ] 00:20:41.392 }' 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.392 [2024-10-08 16:28:34.636448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.392 [2024-10-08 16:28:34.649948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.392 16:28:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:41.392 [2024-10-08 16:28:34.652740] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.769 16:28:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.769 "name": "raid_bdev1", 00:20:42.769 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:42.769 "strip_size_kb": 0, 00:20:42.769 "state": "online", 00:20:42.769 "raid_level": "raid1", 00:20:42.769 "superblock": true, 00:20:42.769 "num_base_bdevs": 2, 00:20:42.769 "num_base_bdevs_discovered": 2, 00:20:42.769 "num_base_bdevs_operational": 2, 00:20:42.769 "process": { 00:20:42.769 "type": "rebuild", 00:20:42.769 "target": "spare", 00:20:42.769 "progress": { 00:20:42.769 "blocks": 2560, 00:20:42.769 "percent": 32 00:20:42.769 } 00:20:42.769 }, 00:20:42.769 "base_bdevs_list": [ 00:20:42.769 { 00:20:42.769 "name": "spare", 00:20:42.769 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:42.769 "is_configured": true, 00:20:42.769 "data_offset": 256, 00:20:42.769 "data_size": 7936 00:20:42.769 }, 00:20:42.769 { 00:20:42.769 "name": "BaseBdev2", 00:20:42.769 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:42.769 "is_configured": true, 00:20:42.769 "data_offset": 256, 00:20:42.769 "data_size": 7936 00:20:42.769 } 00:20:42.769 ] 00:20:42.769 }' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:42.769 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=785 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.769 16:28:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.769 "name": "raid_bdev1", 00:20:42.769 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:42.769 "strip_size_kb": 0, 00:20:42.769 "state": "online", 00:20:42.769 "raid_level": "raid1", 00:20:42.769 "superblock": true, 00:20:42.769 "num_base_bdevs": 2, 00:20:42.769 "num_base_bdevs_discovered": 2, 00:20:42.769 "num_base_bdevs_operational": 2, 00:20:42.769 "process": { 00:20:42.769 "type": "rebuild", 00:20:42.769 "target": "spare", 00:20:42.769 "progress": { 00:20:42.769 "blocks": 2816, 00:20:42.769 "percent": 35 00:20:42.769 } 00:20:42.769 }, 00:20:42.769 "base_bdevs_list": [ 00:20:42.769 { 00:20:42.769 "name": "spare", 00:20:42.769 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:42.769 "is_configured": true, 00:20:42.769 "data_offset": 256, 00:20:42.769 "data_size": 7936 00:20:42.769 }, 00:20:42.769 { 00:20:42.769 "name": "BaseBdev2", 00:20:42.769 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:42.769 "is_configured": true, 00:20:42.769 "data_offset": 256, 00:20:42.769 "data_size": 7936 00:20:42.769 } 00:20:42.769 ] 00:20:42.769 }' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.769 16:28:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.704 16:28:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.963 "name": "raid_bdev1", 00:20:43.963 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:43.963 "strip_size_kb": 0, 00:20:43.963 "state": "online", 00:20:43.963 "raid_level": "raid1", 00:20:43.963 "superblock": true, 00:20:43.963 "num_base_bdevs": 2, 00:20:43.963 
"num_base_bdevs_discovered": 2, 00:20:43.963 "num_base_bdevs_operational": 2, 00:20:43.963 "process": { 00:20:43.963 "type": "rebuild", 00:20:43.963 "target": "spare", 00:20:43.963 "progress": { 00:20:43.963 "blocks": 5888, 00:20:43.963 "percent": 74 00:20:43.963 } 00:20:43.963 }, 00:20:43.963 "base_bdevs_list": [ 00:20:43.963 { 00:20:43.963 "name": "spare", 00:20:43.963 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:43.963 "is_configured": true, 00:20:43.963 "data_offset": 256, 00:20:43.963 "data_size": 7936 00:20:43.963 }, 00:20:43.963 { 00:20:43.963 "name": "BaseBdev2", 00:20:43.963 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:43.963 "is_configured": true, 00:20:43.963 "data_offset": 256, 00:20:43.963 "data_size": 7936 00:20:43.963 } 00:20:43.963 ] 00:20:43.963 }' 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.963 16:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:44.530 [2024-10-08 16:28:37.781965] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:44.530 [2024-10-08 16:28:37.782097] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:44.530 [2024-10-08 16:28:37.782264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.098 "name": "raid_bdev1", 00:20:45.098 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:45.098 "strip_size_kb": 0, 00:20:45.098 "state": "online", 00:20:45.098 "raid_level": "raid1", 00:20:45.098 "superblock": true, 00:20:45.098 "num_base_bdevs": 2, 00:20:45.098 "num_base_bdevs_discovered": 2, 00:20:45.098 "num_base_bdevs_operational": 2, 00:20:45.098 "base_bdevs_list": [ 00:20:45.098 { 00:20:45.098 "name": "spare", 00:20:45.098 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:45.098 "is_configured": true, 00:20:45.098 "data_offset": 256, 00:20:45.098 "data_size": 7936 00:20:45.098 }, 00:20:45.098 { 00:20:45.098 "name": "BaseBdev2", 00:20:45.098 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:45.098 
"is_configured": true, 00:20:45.098 "data_offset": 256, 00:20:45.098 "data_size": 7936 00:20:45.098 } 00:20:45.098 ] 00:20:45.098 }' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.098 16:28:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.098 "name": "raid_bdev1", 00:20:45.098 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:45.098 "strip_size_kb": 0, 00:20:45.098 "state": "online", 00:20:45.098 "raid_level": "raid1", 00:20:45.098 "superblock": true, 00:20:45.098 "num_base_bdevs": 2, 00:20:45.098 "num_base_bdevs_discovered": 2, 00:20:45.098 "num_base_bdevs_operational": 2, 00:20:45.098 "base_bdevs_list": [ 00:20:45.098 { 00:20:45.098 "name": "spare", 00:20:45.098 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:45.098 "is_configured": true, 00:20:45.098 "data_offset": 256, 00:20:45.098 "data_size": 7936 00:20:45.098 }, 00:20:45.098 { 00:20:45.098 "name": "BaseBdev2", 00:20:45.098 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:45.098 "is_configured": true, 00:20:45.098 "data_offset": 256, 00:20:45.098 "data_size": 7936 00:20:45.098 } 00:20:45.098 ] 00:20:45.098 }' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:45.098 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.356 16:28:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.356 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.356 "name": "raid_bdev1", 00:20:45.357 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:45.357 "strip_size_kb": 0, 00:20:45.357 "state": "online", 00:20:45.357 "raid_level": "raid1", 00:20:45.357 "superblock": true, 00:20:45.357 "num_base_bdevs": 2, 00:20:45.357 "num_base_bdevs_discovered": 2, 00:20:45.357 "num_base_bdevs_operational": 2, 00:20:45.357 "base_bdevs_list": [ 00:20:45.357 { 00:20:45.357 "name": "spare", 00:20:45.357 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:45.357 "is_configured": true, 00:20:45.357 "data_offset": 256, 00:20:45.357 "data_size": 
7936 00:20:45.357 }, 00:20:45.357 { 00:20:45.357 "name": "BaseBdev2", 00:20:45.357 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:45.357 "is_configured": true, 00:20:45.357 "data_offset": 256, 00:20:45.357 "data_size": 7936 00:20:45.357 } 00:20:45.357 ] 00:20:45.357 }' 00:20:45.357 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.357 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.922 [2024-10-08 16:28:38.966548] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.922 [2024-10-08 16:28:38.966779] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.922 [2024-10-08 16:28:38.966935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.922 [2024-10-08 16:28:38.967048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.922 [2024-10-08 16:28:38.967066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:45.922 16:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.922 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:45.922 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:45.922 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:45.923 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:46.181 /dev/nbd0 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.181 1+0 records in 00:20:46.181 1+0 records out 00:20:46.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245689 s, 16.7 MB/s 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:46.181 16:28:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:46.181 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.182 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:46.182 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:46.440 /dev/nbd1 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.440 1+0 records in 00:20:46.440 1+0 records out 00:20:46.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000454911 s, 9.0 MB/s 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:46.440 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.698 16:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:46.957 16:28:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.957 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:47.216 16:28:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.216 [2024-10-08 16:28:40.508886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:47.216 [2024-10-08 16:28:40.508977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.216 [2024-10-08 16:28:40.509013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:47.216 [2024-10-08 16:28:40.509029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.216 [2024-10-08 16:28:40.512258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.216 [2024-10-08 16:28:40.512461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:47.216 [2024-10-08 16:28:40.512591] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:47.216 [2024-10-08 16:28:40.512666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:20:47.216 [2024-10-08 16:28:40.512896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.216 spare 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.216 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.475 [2024-10-08 16:28:40.613109] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:47.475 [2024-10-08 16:28:40.613274] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.475 [2024-10-08 16:28:40.613425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:47.475 [2024-10-08 16:28:40.613690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:47.475 [2024-10-08 16:28:40.613707] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:47.475 [2024-10-08 16:28:40.613862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.475 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.475 "name": "raid_bdev1", 00:20:47.475 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:47.476 "strip_size_kb": 0, 00:20:47.476 "state": "online", 00:20:47.476 "raid_level": "raid1", 00:20:47.476 "superblock": true, 00:20:47.476 "num_base_bdevs": 2, 00:20:47.476 "num_base_bdevs_discovered": 2, 00:20:47.476 "num_base_bdevs_operational": 2, 00:20:47.476 "base_bdevs_list": [ 00:20:47.476 { 00:20:47.476 "name": "spare", 00:20:47.476 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:47.476 
"is_configured": true, 00:20:47.476 "data_offset": 256, 00:20:47.476 "data_size": 7936 00:20:47.476 }, 00:20:47.476 { 00:20:47.476 "name": "BaseBdev2", 00:20:47.476 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:47.476 "is_configured": true, 00:20:47.476 "data_offset": 256, 00:20:47.476 "data_size": 7936 00:20:47.476 } 00:20:47.476 ] 00:20:47.476 }' 00:20:47.476 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.476 16:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.043 "name": "raid_bdev1", 00:20:48.043 "uuid": 
"e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:48.043 "strip_size_kb": 0, 00:20:48.043 "state": "online", 00:20:48.043 "raid_level": "raid1", 00:20:48.043 "superblock": true, 00:20:48.043 "num_base_bdevs": 2, 00:20:48.043 "num_base_bdevs_discovered": 2, 00:20:48.043 "num_base_bdevs_operational": 2, 00:20:48.043 "base_bdevs_list": [ 00:20:48.043 { 00:20:48.043 "name": "spare", 00:20:48.043 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:48.043 "is_configured": true, 00:20:48.043 "data_offset": 256, 00:20:48.043 "data_size": 7936 00:20:48.043 }, 00:20:48.043 { 00:20:48.043 "name": "BaseBdev2", 00:20:48.043 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:48.043 "is_configured": true, 00:20:48.043 "data_offset": 256, 00:20:48.043 "data_size": 7936 00:20:48.043 } 00:20:48.043 ] 00:20:48.043 }' 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.043 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.302 [2024-10-08 16:28:41.365401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.302 16:28:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.302 "name": "raid_bdev1", 00:20:48.302 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:48.302 "strip_size_kb": 0, 00:20:48.302 "state": "online", 00:20:48.302 "raid_level": "raid1", 00:20:48.302 "superblock": true, 00:20:48.302 "num_base_bdevs": 2, 00:20:48.302 "num_base_bdevs_discovered": 1, 00:20:48.302 "num_base_bdevs_operational": 1, 00:20:48.302 "base_bdevs_list": [ 00:20:48.302 { 00:20:48.302 "name": null, 00:20:48.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.302 "is_configured": false, 00:20:48.302 "data_offset": 0, 00:20:48.302 "data_size": 7936 00:20:48.302 }, 00:20:48.302 { 00:20:48.302 "name": "BaseBdev2", 00:20:48.302 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:48.302 "is_configured": true, 00:20:48.302 "data_offset": 256, 00:20:48.302 "data_size": 7936 00:20:48.302 } 00:20:48.302 ] 00:20:48.302 }' 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.302 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:48.868 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.868 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.868 [2024-10-08 16:28:41.909657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.868 [2024-10-08 16:28:41.910007] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:48.868 [2024-10-08 16:28:41.910032] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:48.868 [2024-10-08 16:28:41.910099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.868 [2024-10-08 16:28:41.923209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:48.868 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.868 16:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:48.868 [2024-10-08 16:28:41.926063] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.803 16:28:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.803 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.803 "name": "raid_bdev1", 00:20:49.803 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:49.803 "strip_size_kb": 0, 00:20:49.803 "state": "online", 00:20:49.803 "raid_level": "raid1", 00:20:49.803 "superblock": true, 00:20:49.803 "num_base_bdevs": 2, 00:20:49.803 "num_base_bdevs_discovered": 2, 00:20:49.803 "num_base_bdevs_operational": 2, 00:20:49.803 "process": { 00:20:49.803 "type": "rebuild", 00:20:49.803 "target": "spare", 00:20:49.803 "progress": { 00:20:49.803 "blocks": 2560, 00:20:49.804 "percent": 32 00:20:49.804 } 00:20:49.804 }, 00:20:49.804 "base_bdevs_list": [ 00:20:49.804 { 00:20:49.804 "name": "spare", 00:20:49.804 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:49.804 "is_configured": true, 00:20:49.804 "data_offset": 256, 00:20:49.804 "data_size": 7936 00:20:49.804 }, 00:20:49.804 { 00:20:49.804 "name": "BaseBdev2", 00:20:49.804 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:49.804 "is_configured": true, 00:20:49.804 "data_offset": 256, 00:20:49.804 "data_size": 7936 00:20:49.804 } 00:20:49.804 ] 00:20:49.804 }' 00:20:49.804 16:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.804 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.804 [2024-10-08 16:28:43.099723] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.062 [2024-10-08 16:28:43.138192] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.062 [2024-10-08 16:28:43.138302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.062 [2024-10-08 16:28:43.138346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.063 [2024-10-08 16:28:43.138361] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.063 "name": "raid_bdev1", 00:20:50.063 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:50.063 "strip_size_kb": 0, 00:20:50.063 "state": "online", 00:20:50.063 "raid_level": "raid1", 00:20:50.063 "superblock": true, 00:20:50.063 "num_base_bdevs": 2, 00:20:50.063 "num_base_bdevs_discovered": 1, 00:20:50.063 "num_base_bdevs_operational": 1, 00:20:50.063 "base_bdevs_list": [ 00:20:50.063 { 00:20:50.063 "name": null, 00:20:50.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.063 "is_configured": false, 00:20:50.063 "data_offset": 0, 00:20:50.063 "data_size": 7936 00:20:50.063 }, 00:20:50.063 { 00:20:50.063 "name": "BaseBdev2", 00:20:50.063 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:50.063 "is_configured": true, 00:20:50.063 "data_offset": 256, 00:20:50.063 "data_size": 7936 00:20:50.063 } 00:20:50.063 ] 00:20:50.063 }' 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.063 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.629 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:50.629 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.629 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.629 [2024-10-08 16:28:43.691148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.629 [2024-10-08 16:28:43.691267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.629 [2024-10-08 16:28:43.691317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:50.629 [2024-10-08 16:28:43.691354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.629 [2024-10-08 16:28:43.691793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.629 [2024-10-08 16:28:43.691826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.629 [2024-10-08 16:28:43.691939] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:50.629 [2024-10-08 16:28:43.691965] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:50.629 [2024-10-08 16:28:43.691985] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:50.629 [2024-10-08 16:28:43.692027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.629 [2024-10-08 16:28:43.706074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:50.629 spare 00:20:50.629 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.629 16:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:50.629 [2024-10-08 16:28:43.708987] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.566 "name": 
"raid_bdev1", 00:20:51.566 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:51.566 "strip_size_kb": 0, 00:20:51.566 "state": "online", 00:20:51.566 "raid_level": "raid1", 00:20:51.566 "superblock": true, 00:20:51.566 "num_base_bdevs": 2, 00:20:51.566 "num_base_bdevs_discovered": 2, 00:20:51.566 "num_base_bdevs_operational": 2, 00:20:51.566 "process": { 00:20:51.566 "type": "rebuild", 00:20:51.566 "target": "spare", 00:20:51.566 "progress": { 00:20:51.566 "blocks": 2560, 00:20:51.566 "percent": 32 00:20:51.566 } 00:20:51.566 }, 00:20:51.566 "base_bdevs_list": [ 00:20:51.566 { 00:20:51.566 "name": "spare", 00:20:51.566 "uuid": "5be90493-acbf-570e-8d6c-10898896ad4a", 00:20:51.566 "is_configured": true, 00:20:51.566 "data_offset": 256, 00:20:51.566 "data_size": 7936 00:20:51.566 }, 00:20:51.566 { 00:20:51.566 "name": "BaseBdev2", 00:20:51.566 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:51.566 "is_configured": true, 00:20:51.566 "data_offset": 256, 00:20:51.566 "data_size": 7936 00:20:51.566 } 00:20:51.566 ] 00:20:51.566 }' 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.566 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.566 [2024-10-08 16:28:44.879856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:51.825 [2024-10-08 16:28:44.921608] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.825 [2024-10-08 16:28:44.921883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.825 [2024-10-08 16:28:44.922039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.825 [2024-10-08 16:28:44.922092] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.825 "name": "raid_bdev1", 00:20:51.825 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:51.825 "strip_size_kb": 0, 00:20:51.825 "state": "online", 00:20:51.825 "raid_level": "raid1", 00:20:51.825 "superblock": true, 00:20:51.825 "num_base_bdevs": 2, 00:20:51.825 "num_base_bdevs_discovered": 1, 00:20:51.825 "num_base_bdevs_operational": 1, 00:20:51.825 "base_bdevs_list": [ 00:20:51.825 { 00:20:51.825 "name": null, 00:20:51.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.825 "is_configured": false, 00:20:51.825 "data_offset": 0, 00:20:51.825 "data_size": 7936 00:20:51.825 }, 00:20:51.825 { 00:20:51.825 "name": "BaseBdev2", 00:20:51.825 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:51.825 "is_configured": true, 00:20:51.825 "data_offset": 256, 00:20:51.825 "data_size": 7936 00:20:51.825 } 00:20:51.825 ] 00:20:51.825 }' 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.825 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.393 16:28:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.393 "name": "raid_bdev1", 00:20:52.393 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:52.393 "strip_size_kb": 0, 00:20:52.393 "state": "online", 00:20:52.393 "raid_level": "raid1", 00:20:52.393 "superblock": true, 00:20:52.393 "num_base_bdevs": 2, 00:20:52.393 "num_base_bdevs_discovered": 1, 00:20:52.393 "num_base_bdevs_operational": 1, 00:20:52.393 "base_bdevs_list": [ 00:20:52.393 { 00:20:52.393 "name": null, 00:20:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.393 "is_configured": false, 00:20:52.393 "data_offset": 0, 00:20:52.393 "data_size": 7936 00:20:52.393 }, 00:20:52.393 { 00:20:52.393 "name": "BaseBdev2", 00:20:52.393 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:52.393 "is_configured": true, 00:20:52.393 "data_offset": 256, 00:20:52.393 "data_size": 7936 00:20:52.393 } 00:20:52.393 ] 00:20:52.393 }' 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.393 [2024-10-08 16:28:45.598492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:52.393 [2024-10-08 16:28:45.598611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.393 [2024-10-08 16:28:45.598652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:52.393 [2024-10-08 16:28:45.598669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.393 [2024-10-08 16:28:45.598996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.393 [2024-10-08 16:28:45.599033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:52.393 [2024-10-08 16:28:45.599108] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:52.393 [2024-10-08 16:28:45.599130] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:52.393 [2024-10-08 16:28:45.599143] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:52.393 [2024-10-08 16:28:45.599158] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:52.393 BaseBdev1 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.393 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.330 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.589 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.589 "name": "raid_bdev1", 00:20:53.589 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:53.589 "strip_size_kb": 0, 00:20:53.589 "state": "online", 00:20:53.589 "raid_level": "raid1", 00:20:53.589 "superblock": true, 00:20:53.589 "num_base_bdevs": 2, 00:20:53.589 "num_base_bdevs_discovered": 1, 00:20:53.589 "num_base_bdevs_operational": 1, 00:20:53.589 "base_bdevs_list": [ 00:20:53.589 { 00:20:53.589 "name": null, 00:20:53.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.589 "is_configured": false, 00:20:53.589 "data_offset": 0, 00:20:53.589 "data_size": 7936 00:20:53.589 }, 00:20:53.589 { 00:20:53.589 "name": "BaseBdev2", 00:20:53.589 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:53.589 "is_configured": true, 00:20:53.589 "data_offset": 256, 00:20:53.589 "data_size": 7936 00:20:53.589 } 00:20:53.589 ] 00:20:53.589 }' 00:20:53.589 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.589 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.848 "name": "raid_bdev1", 00:20:53.848 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:53.848 "strip_size_kb": 0, 00:20:53.848 "state": "online", 00:20:53.848 "raid_level": "raid1", 00:20:53.848 "superblock": true, 00:20:53.848 "num_base_bdevs": 2, 00:20:53.848 "num_base_bdevs_discovered": 1, 00:20:53.848 "num_base_bdevs_operational": 1, 00:20:53.848 "base_bdevs_list": [ 00:20:53.848 { 00:20:53.848 "name": null, 00:20:53.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.848 "is_configured": false, 00:20:53.848 "data_offset": 0, 00:20:53.848 "data_size": 7936 00:20:53.848 }, 00:20:53.848 { 00:20:53.848 "name": "BaseBdev2", 00:20:53.848 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:53.848 "is_configured": 
true, 00:20:53.848 "data_offset": 256, 00:20:53.848 "data_size": 7936 00:20:53.848 } 00:20:53.848 ] 00:20:53.848 }' 00:20:53.848 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.107 [2024-10-08 16:28:47.271180] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.107 [2024-10-08 16:28:47.271518] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:54.107 [2024-10-08 16:28:47.271564] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:54.107 request: 00:20:54.107 { 00:20:54.107 "base_bdev": "BaseBdev1", 00:20:54.107 "raid_bdev": "raid_bdev1", 00:20:54.107 "method": "bdev_raid_add_base_bdev", 00:20:54.107 "req_id": 1 00:20:54.107 } 00:20:54.107 Got JSON-RPC error response 00:20:54.107 response: 00:20:54.107 { 00:20:54.107 "code": -22, 00:20:54.107 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:54.107 } 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.107 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.043 "name": "raid_bdev1", 00:20:55.043 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:55.043 "strip_size_kb": 0, 00:20:55.043 "state": "online", 00:20:55.043 "raid_level": "raid1", 00:20:55.043 "superblock": true, 00:20:55.043 "num_base_bdevs": 2, 00:20:55.043 "num_base_bdevs_discovered": 1, 00:20:55.043 "num_base_bdevs_operational": 1, 00:20:55.043 "base_bdevs_list": [ 00:20:55.043 { 00:20:55.043 "name": null, 00:20:55.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.043 "is_configured": false, 00:20:55.043 
"data_offset": 0, 00:20:55.043 "data_size": 7936 00:20:55.043 }, 00:20:55.043 { 00:20:55.043 "name": "BaseBdev2", 00:20:55.043 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:55.043 "is_configured": true, 00:20:55.043 "data_offset": 256, 00:20:55.043 "data_size": 7936 00:20:55.043 } 00:20:55.043 ] 00:20:55.043 }' 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.043 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.643 "name": "raid_bdev1", 00:20:55.643 "uuid": "e16c9ca3-9cf2-4a6d-b039-ca55cacd26ee", 00:20:55.643 
"strip_size_kb": 0, 00:20:55.643 "state": "online", 00:20:55.643 "raid_level": "raid1", 00:20:55.643 "superblock": true, 00:20:55.643 "num_base_bdevs": 2, 00:20:55.643 "num_base_bdevs_discovered": 1, 00:20:55.643 "num_base_bdevs_operational": 1, 00:20:55.643 "base_bdevs_list": [ 00:20:55.643 { 00:20:55.643 "name": null, 00:20:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.643 "is_configured": false, 00:20:55.643 "data_offset": 0, 00:20:55.643 "data_size": 7936 00:20:55.643 }, 00:20:55.643 { 00:20:55.643 "name": "BaseBdev2", 00:20:55.643 "uuid": "ed8d2fca-fe17-54af-998d-23c8b2928b21", 00:20:55.643 "is_configured": true, 00:20:55.643 "data_offset": 256, 00:20:55.643 "data_size": 7936 00:20:55.643 } 00:20:55.643 ] 00:20:55.643 }' 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.643 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88597 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88597 ']' 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88597 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.902 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88597 00:20:55.902 killing process with 
pid 88597 00:20:55.902 Received shutdown signal, test time was about 60.000000 seconds 00:20:55.902 00:20:55.902 Latency(us) 00:20:55.902 [2024-10-08T16:28:49.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.902 [2024-10-08T16:28:49.224Z] =================================================================================================================== 00:20:55.902 [2024-10-08T16:28:49.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.902 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:55.902 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:55.902 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88597' 00:20:55.902 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88597 00:20:55.902 [2024-10-08 16:28:49.018652] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:55.902 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88597 00:20:55.902 [2024-10-08 16:28:49.018843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:55.902 [2024-10-08 16:28:49.018917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:55.902 [2024-10-08 16:28:49.018938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:56.161 [2024-10-08 16:28:49.318529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.535 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:57.535 00:20:57.535 real 0m21.959s 00:20:57.535 user 0m29.426s 00:20:57.535 sys 0m2.735s 00:20:57.535 16:28:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.535 ************************************ 00:20:57.535 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.535 END TEST raid_rebuild_test_sb_md_separate 00:20:57.535 ************************************ 00:20:57.535 16:28:50 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:57.535 16:28:50 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:57.535 16:28:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:57.535 16:28:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.535 16:28:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.535 ************************************ 00:20:57.535 START TEST raid_state_function_test_sb_md_interleaved 00:20:57.536 ************************************ 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.536 16:28:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89305 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89305' 00:20:57.536 Process raid pid: 89305 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89305 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89305 ']' 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.536 16:28:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.536 [2024-10-08 16:28:50.832314] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:20:57.536 [2024-10-08 16:28:50.832564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.794 [2024-10-08 16:28:51.016630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.053 [2024-10-08 16:28:51.309436] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.312 [2024-10-08 16:28:51.549400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.312 [2024-10-08 16:28:51.549472] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.571 [2024-10-08 16:28:51.865118] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:58.571 [2024-10-08 16:28:51.865185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.571 [2024-10-08 16:28:51.865209] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.571 [2024-10-08 16:28:51.865235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.571 16:28:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.571 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.572 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.572 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.572 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.572 16:28:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.830 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.830 "name": "Existed_Raid", 00:20:58.830 "uuid": "33f5b5e0-8925-49a7-bd77-50ea212dfec4", 00:20:58.830 "strip_size_kb": 0, 00:20:58.830 "state": "configuring", 00:20:58.830 "raid_level": "raid1", 00:20:58.830 "superblock": true, 00:20:58.830 "num_base_bdevs": 2, 00:20:58.830 "num_base_bdevs_discovered": 0, 00:20:58.830 "num_base_bdevs_operational": 2, 00:20:58.830 "base_bdevs_list": [ 00:20:58.830 { 00:20:58.830 "name": "BaseBdev1", 00:20:58.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.830 "is_configured": false, 00:20:58.830 "data_offset": 0, 00:20:58.830 "data_size": 0 00:20:58.830 }, 00:20:58.830 { 00:20:58.830 "name": "BaseBdev2", 00:20:58.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.830 "is_configured": false, 00:20:58.830 "data_offset": 0, 00:20:58.830 "data_size": 0 00:20:58.830 } 00:20:58.830 ] 00:20:58.830 }' 00:20:58.830 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.830 16:28:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.089 [2024-10-08 16:28:52.389273] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.089 [2024-10-08 16:28:52.389336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.089 [2024-10-08 16:28:52.397246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.089 [2024-10-08 16:28:52.397324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:59.089 [2024-10-08 16:28:52.397349] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.089 [2024-10-08 16:28:52.397379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.089 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.349 [2024-10-08 16:28:52.464973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.349 BaseBdev1 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.349 [ 00:20:59.349 { 00:20:59.349 "name": "BaseBdev1", 00:20:59.349 "aliases": [ 00:20:59.349 "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e" 00:20:59.349 ], 00:20:59.349 "product_name": "Malloc disk", 00:20:59.349 "block_size": 4128, 00:20:59.349 "num_blocks": 8192, 00:20:59.349 "uuid": "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e", 00:20:59.349 "md_size": 32, 00:20:59.349 
"md_interleave": true, 00:20:59.349 "dif_type": 0, 00:20:59.349 "assigned_rate_limits": { 00:20:59.349 "rw_ios_per_sec": 0, 00:20:59.349 "rw_mbytes_per_sec": 0, 00:20:59.349 "r_mbytes_per_sec": 0, 00:20:59.349 "w_mbytes_per_sec": 0 00:20:59.349 }, 00:20:59.349 "claimed": true, 00:20:59.349 "claim_type": "exclusive_write", 00:20:59.349 "zoned": false, 00:20:59.349 "supported_io_types": { 00:20:59.349 "read": true, 00:20:59.349 "write": true, 00:20:59.349 "unmap": true, 00:20:59.349 "flush": true, 00:20:59.349 "reset": true, 00:20:59.349 "nvme_admin": false, 00:20:59.349 "nvme_io": false, 00:20:59.349 "nvme_io_md": false, 00:20:59.349 "write_zeroes": true, 00:20:59.349 "zcopy": true, 00:20:59.349 "get_zone_info": false, 00:20:59.349 "zone_management": false, 00:20:59.349 "zone_append": false, 00:20:59.349 "compare": false, 00:20:59.349 "compare_and_write": false, 00:20:59.349 "abort": true, 00:20:59.349 "seek_hole": false, 00:20:59.349 "seek_data": false, 00:20:59.349 "copy": true, 00:20:59.349 "nvme_iov_md": false 00:20:59.349 }, 00:20:59.349 "memory_domains": [ 00:20:59.349 { 00:20:59.349 "dma_device_id": "system", 00:20:59.349 "dma_device_type": 1 00:20:59.349 }, 00:20:59.349 { 00:20:59.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.349 "dma_device_type": 2 00:20:59.349 } 00:20:59.349 ], 00:20:59.349 "driver_specific": {} 00:20:59.349 } 00:20:59.349 ] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.349 16:28:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.349 "name": "Existed_Raid", 00:20:59.349 "uuid": "af1a15ca-1e04-4fdf-a324-92d453ac4134", 00:20:59.349 "strip_size_kb": 0, 00:20:59.349 "state": "configuring", 00:20:59.349 "raid_level": "raid1", 
00:20:59.349 "superblock": true, 00:20:59.349 "num_base_bdevs": 2, 00:20:59.349 "num_base_bdevs_discovered": 1, 00:20:59.349 "num_base_bdevs_operational": 2, 00:20:59.349 "base_bdevs_list": [ 00:20:59.349 { 00:20:59.349 "name": "BaseBdev1", 00:20:59.349 "uuid": "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e", 00:20:59.349 "is_configured": true, 00:20:59.349 "data_offset": 256, 00:20:59.349 "data_size": 7936 00:20:59.349 }, 00:20:59.349 { 00:20:59.349 "name": "BaseBdev2", 00:20:59.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.349 "is_configured": false, 00:20:59.349 "data_offset": 0, 00:20:59.349 "data_size": 0 00:20:59.349 } 00:20:59.349 ] 00:20:59.349 }' 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.349 16:28:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.917 [2024-10-08 16:28:53.009296] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.917 [2024-10-08 16:28:53.009396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.917 [2024-10-08 16:28:53.017352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.917 [2024-10-08 16:28:53.020556] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.917 [2024-10-08 16:28:53.020617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:59.917 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.918 
16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.918 "name": "Existed_Raid", 00:20:59.918 "uuid": "002ae0e8-5b31-43f3-bce3-0c5b731f0850", 00:20:59.918 "strip_size_kb": 0, 00:20:59.918 "state": "configuring", 00:20:59.918 "raid_level": "raid1", 00:20:59.918 "superblock": true, 00:20:59.918 "num_base_bdevs": 2, 00:20:59.918 "num_base_bdevs_discovered": 1, 00:20:59.918 "num_base_bdevs_operational": 2, 00:20:59.918 "base_bdevs_list": [ 00:20:59.918 { 00:20:59.918 "name": "BaseBdev1", 00:20:59.918 "uuid": "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e", 00:20:59.918 "is_configured": true, 00:20:59.918 "data_offset": 256, 00:20:59.918 "data_size": 7936 00:20:59.918 }, 00:20:59.918 { 00:20:59.918 "name": "BaseBdev2", 00:20:59.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.918 "is_configured": false, 00:20:59.918 "data_offset": 0, 00:20:59.918 "data_size": 0 00:20:59.918 } 00:20:59.918 ] 00:20:59.918 }' 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:59.918 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.485 [2024-10-08 16:28:53.556158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.485 [2024-10-08 16:28:53.556603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:00.485 [2024-10-08 16:28:53.556625] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:00.485 [2024-10-08 16:28:53.556766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:00.485 [2024-10-08 16:28:53.556886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.485 [2024-10-08 16:28:53.556912] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:00.485 [2024-10-08 16:28:53.557065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.485 BaseBdev2 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:21:00.485 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.486 [ 00:21:00.486 { 00:21:00.486 "name": "BaseBdev2", 00:21:00.486 "aliases": [ 00:21:00.486 "1d3157e6-f412-4d55-867d-8ef17ea5a67c" 00:21:00.486 ], 00:21:00.486 "product_name": "Malloc disk", 00:21:00.486 "block_size": 4128, 00:21:00.486 "num_blocks": 8192, 00:21:00.486 "uuid": "1d3157e6-f412-4d55-867d-8ef17ea5a67c", 00:21:00.486 "md_size": 32, 00:21:00.486 "md_interleave": true, 00:21:00.486 "dif_type": 0, 00:21:00.486 "assigned_rate_limits": { 00:21:00.486 "rw_ios_per_sec": 0, 00:21:00.486 "rw_mbytes_per_sec": 0, 00:21:00.486 "r_mbytes_per_sec": 0, 00:21:00.486 "w_mbytes_per_sec": 0 00:21:00.486 }, 00:21:00.486 "claimed": true, 00:21:00.486 "claim_type": "exclusive_write", 
00:21:00.486 "zoned": false, 00:21:00.486 "supported_io_types": { 00:21:00.486 "read": true, 00:21:00.486 "write": true, 00:21:00.486 "unmap": true, 00:21:00.486 "flush": true, 00:21:00.486 "reset": true, 00:21:00.486 "nvme_admin": false, 00:21:00.486 "nvme_io": false, 00:21:00.486 "nvme_io_md": false, 00:21:00.486 "write_zeroes": true, 00:21:00.486 "zcopy": true, 00:21:00.486 "get_zone_info": false, 00:21:00.486 "zone_management": false, 00:21:00.486 "zone_append": false, 00:21:00.486 "compare": false, 00:21:00.486 "compare_and_write": false, 00:21:00.486 "abort": true, 00:21:00.486 "seek_hole": false, 00:21:00.486 "seek_data": false, 00:21:00.486 "copy": true, 00:21:00.486 "nvme_iov_md": false 00:21:00.486 }, 00:21:00.486 "memory_domains": [ 00:21:00.486 { 00:21:00.486 "dma_device_id": "system", 00:21:00.486 "dma_device_type": 1 00:21:00.486 }, 00:21:00.486 { 00:21:00.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.486 "dma_device_type": 2 00:21:00.486 } 00:21:00.486 ], 00:21:00.486 "driver_specific": {} 00:21:00.486 } 00:21:00.486 ] 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.486 
16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.486 "name": "Existed_Raid", 00:21:00.486 "uuid": "002ae0e8-5b31-43f3-bce3-0c5b731f0850", 00:21:00.486 "strip_size_kb": 0, 00:21:00.486 "state": "online", 00:21:00.486 "raid_level": "raid1", 00:21:00.486 "superblock": true, 00:21:00.486 "num_base_bdevs": 2, 00:21:00.486 "num_base_bdevs_discovered": 2, 00:21:00.486 
"num_base_bdevs_operational": 2, 00:21:00.486 "base_bdevs_list": [ 00:21:00.486 { 00:21:00.486 "name": "BaseBdev1", 00:21:00.486 "uuid": "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e", 00:21:00.486 "is_configured": true, 00:21:00.486 "data_offset": 256, 00:21:00.486 "data_size": 7936 00:21:00.486 }, 00:21:00.486 { 00:21:00.486 "name": "BaseBdev2", 00:21:00.486 "uuid": "1d3157e6-f412-4d55-867d-8ef17ea5a67c", 00:21:00.486 "is_configured": true, 00:21:00.486 "data_offset": 256, 00:21:00.486 "data_size": 7936 00:21:00.486 } 00:21:00.486 ] 00:21:00.486 }' 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.486 16:28:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.057 16:28:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.057 [2024-10-08 16:28:54.144921] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.057 "name": "Existed_Raid", 00:21:01.057 "aliases": [ 00:21:01.057 "002ae0e8-5b31-43f3-bce3-0c5b731f0850" 00:21:01.057 ], 00:21:01.057 "product_name": "Raid Volume", 00:21:01.057 "block_size": 4128, 00:21:01.057 "num_blocks": 7936, 00:21:01.057 "uuid": "002ae0e8-5b31-43f3-bce3-0c5b731f0850", 00:21:01.057 "md_size": 32, 00:21:01.057 "md_interleave": true, 00:21:01.057 "dif_type": 0, 00:21:01.057 "assigned_rate_limits": { 00:21:01.057 "rw_ios_per_sec": 0, 00:21:01.057 "rw_mbytes_per_sec": 0, 00:21:01.057 "r_mbytes_per_sec": 0, 00:21:01.057 "w_mbytes_per_sec": 0 00:21:01.057 }, 00:21:01.057 "claimed": false, 00:21:01.057 "zoned": false, 00:21:01.057 "supported_io_types": { 00:21:01.057 "read": true, 00:21:01.057 "write": true, 00:21:01.057 "unmap": false, 00:21:01.057 "flush": false, 00:21:01.057 "reset": true, 00:21:01.057 "nvme_admin": false, 00:21:01.057 "nvme_io": false, 00:21:01.057 "nvme_io_md": false, 00:21:01.057 "write_zeroes": true, 00:21:01.057 "zcopy": false, 00:21:01.057 "get_zone_info": false, 00:21:01.057 "zone_management": false, 00:21:01.057 "zone_append": false, 00:21:01.057 "compare": false, 00:21:01.057 "compare_and_write": false, 00:21:01.057 "abort": false, 00:21:01.057 "seek_hole": false, 00:21:01.057 "seek_data": false, 00:21:01.057 "copy": false, 00:21:01.057 "nvme_iov_md": false 00:21:01.057 }, 00:21:01.057 "memory_domains": [ 00:21:01.057 { 00:21:01.057 "dma_device_id": "system", 00:21:01.057 "dma_device_type": 1 00:21:01.057 }, 00:21:01.057 { 00:21:01.057 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:01.057 "dma_device_type": 2 00:21:01.057 }, 00:21:01.057 { 00:21:01.057 "dma_device_id": "system", 00:21:01.057 "dma_device_type": 1 00:21:01.057 }, 00:21:01.057 { 00:21:01.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.057 "dma_device_type": 2 00:21:01.057 } 00:21:01.057 ], 00:21:01.057 "driver_specific": { 00:21:01.057 "raid": { 00:21:01.057 "uuid": "002ae0e8-5b31-43f3-bce3-0c5b731f0850", 00:21:01.057 "strip_size_kb": 0, 00:21:01.057 "state": "online", 00:21:01.057 "raid_level": "raid1", 00:21:01.057 "superblock": true, 00:21:01.057 "num_base_bdevs": 2, 00:21:01.057 "num_base_bdevs_discovered": 2, 00:21:01.057 "num_base_bdevs_operational": 2, 00:21:01.057 "base_bdevs_list": [ 00:21:01.057 { 00:21:01.057 "name": "BaseBdev1", 00:21:01.057 "uuid": "dc7a6572-c8af-43f0-bac4-d9a2e9064a5e", 00:21:01.057 "is_configured": true, 00:21:01.057 "data_offset": 256, 00:21:01.057 "data_size": 7936 00:21:01.057 }, 00:21:01.057 { 00:21:01.057 "name": "BaseBdev2", 00:21:01.057 "uuid": "1d3157e6-f412-4d55-867d-8ef17ea5a67c", 00:21:01.057 "is_configured": true, 00:21:01.057 "data_offset": 256, 00:21:01.057 "data_size": 7936 00:21:01.057 } 00:21:01.057 ] 00:21:01.057 } 00:21:01.057 } 00:21:01.057 }' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:01.057 BaseBdev2' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.057 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:01.320 
16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.320 [2024-10-08 16:28:54.400626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:01.320 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.321 16:28:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.321 "name": "Existed_Raid", 00:21:01.321 "uuid": "002ae0e8-5b31-43f3-bce3-0c5b731f0850", 00:21:01.321 "strip_size_kb": 0, 00:21:01.321 "state": "online", 00:21:01.321 "raid_level": "raid1", 00:21:01.321 "superblock": true, 00:21:01.321 "num_base_bdevs": 2, 00:21:01.321 "num_base_bdevs_discovered": 1, 00:21:01.321 "num_base_bdevs_operational": 1, 00:21:01.321 "base_bdevs_list": [ 00:21:01.321 { 00:21:01.321 "name": null, 00:21:01.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:01.321 "is_configured": false, 00:21:01.321 "data_offset": 0, 00:21:01.321 "data_size": 7936 00:21:01.321 }, 00:21:01.321 { 00:21:01.321 "name": "BaseBdev2", 00:21:01.321 "uuid": "1d3157e6-f412-4d55-867d-8ef17ea5a67c", 00:21:01.321 "is_configured": true, 00:21:01.321 "data_offset": 256, 00:21:01.321 "data_size": 7936 00:21:01.321 } 00:21:01.321 ] 00:21:01.321 }' 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.321 16:28:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:01.888 16:28:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.888 [2024-10-08 16:28:55.071082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:01.888 [2024-10-08 16:28:55.071246] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.888 [2024-10-08 16:28:55.158643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.888 [2024-10-08 16:28:55.158747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.888 [2024-10-08 16:28:55.158769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.888 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89305 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89305 ']' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89305 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89305 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.147 killing process with pid 89305 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89305' 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89305 00:21:02.147 16:28:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89305 00:21:02.147 [2024-10-08 16:28:55.251699] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:02.147 [2024-10-08 16:28:55.267011] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:03.525 
16:28:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:03.525 ************************************ 00:21:03.525 END TEST raid_state_function_test_sb_md_interleaved 00:21:03.525 ************************************ 00:21:03.525 00:21:03.525 real 0m5.885s 00:21:03.525 user 0m8.628s 00:21:03.525 sys 0m0.922s 00:21:03.525 16:28:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.525 16:28:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.525 16:28:56 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:03.525 16:28:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:03.525 16:28:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.525 16:28:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.525 ************************************ 00:21:03.525 START TEST raid_superblock_test_md_interleaved 00:21:03.525 ************************************ 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89557 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89557 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89557 ']' 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.525 16:28:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.525 [2024-10-08 16:28:56.768144] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:21:03.525 [2024-10-08 16:28:56.768352] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89557 ] 00:21:03.784 [2024-10-08 16:28:56.949130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.042 [2024-10-08 16:28:57.213371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.301 [2024-10-08 16:28:57.446043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.301 [2024-10-08 16:28:57.446091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.560 malloc1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.560 [2024-10-08 16:28:57.783746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:04.560 [2024-10-08 16:28:57.783835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.560 [2024-10-08 16:28:57.783886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:04.560 [2024-10-08 16:28:57.783911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.560 
[2024-10-08 16:28:57.787090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.560 [2024-10-08 16:28:57.787136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:04.560 pt1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.560 malloc2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.560 [2024-10-08 16:28:57.857228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:04.560 [2024-10-08 16:28:57.857308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.560 [2024-10-08 16:28:57.857352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:04.560 [2024-10-08 16:28:57.857405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.560 [2024-10-08 16:28:57.860212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.560 [2024-10-08 16:28:57.860257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:04.560 pt2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.560 [2024-10-08 16:28:57.865345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:04.560 [2024-10-08 16:28:57.868085] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.560 [2024-10-08 16:28:57.868346] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:04.560 [2024-10-08 16:28:57.868400] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:04.560 [2024-10-08 16:28:57.868525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:04.560 [2024-10-08 16:28:57.868653] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:04.560 [2024-10-08 16:28:57.868718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:04.560 [2024-10-08 16:28:57.868906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.560 
16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.560 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.819 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.819 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.819 "name": "raid_bdev1", 00:21:04.819 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:04.819 "strip_size_kb": 0, 00:21:04.819 "state": "online", 00:21:04.819 "raid_level": "raid1", 00:21:04.819 "superblock": true, 00:21:04.819 "num_base_bdevs": 2, 00:21:04.819 "num_base_bdevs_discovered": 2, 00:21:04.819 "num_base_bdevs_operational": 2, 00:21:04.819 "base_bdevs_list": [ 00:21:04.819 { 00:21:04.819 "name": "pt1", 00:21:04.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.819 "is_configured": true, 00:21:04.819 "data_offset": 256, 00:21:04.819 "data_size": 7936 00:21:04.819 }, 00:21:04.819 { 00:21:04.819 "name": "pt2", 00:21:04.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.819 "is_configured": true, 00:21:04.819 "data_offset": 256, 00:21:04.819 "data_size": 7936 00:21:04.819 } 00:21:04.819 ] 00:21:04.819 }' 00:21:04.819 16:28:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.819 16:28:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:05.078 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.336 [2024-10-08 16:28:58.409947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.336 "name": "raid_bdev1", 00:21:05.336 "aliases": [ 00:21:05.336 "1e5d8e8c-f757-4c4f-a7db-306977414857" 00:21:05.336 ], 00:21:05.336 "product_name": "Raid Volume", 00:21:05.336 "block_size": 4128, 00:21:05.336 "num_blocks": 7936, 00:21:05.336 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:05.336 "md_size": 32, 
00:21:05.336 "md_interleave": true, 00:21:05.336 "dif_type": 0, 00:21:05.336 "assigned_rate_limits": { 00:21:05.336 "rw_ios_per_sec": 0, 00:21:05.336 "rw_mbytes_per_sec": 0, 00:21:05.336 "r_mbytes_per_sec": 0, 00:21:05.336 "w_mbytes_per_sec": 0 00:21:05.336 }, 00:21:05.336 "claimed": false, 00:21:05.336 "zoned": false, 00:21:05.336 "supported_io_types": { 00:21:05.336 "read": true, 00:21:05.336 "write": true, 00:21:05.336 "unmap": false, 00:21:05.336 "flush": false, 00:21:05.336 "reset": true, 00:21:05.336 "nvme_admin": false, 00:21:05.336 "nvme_io": false, 00:21:05.336 "nvme_io_md": false, 00:21:05.336 "write_zeroes": true, 00:21:05.336 "zcopy": false, 00:21:05.336 "get_zone_info": false, 00:21:05.336 "zone_management": false, 00:21:05.336 "zone_append": false, 00:21:05.336 "compare": false, 00:21:05.336 "compare_and_write": false, 00:21:05.336 "abort": false, 00:21:05.336 "seek_hole": false, 00:21:05.336 "seek_data": false, 00:21:05.336 "copy": false, 00:21:05.336 "nvme_iov_md": false 00:21:05.336 }, 00:21:05.336 "memory_domains": [ 00:21:05.336 { 00:21:05.336 "dma_device_id": "system", 00:21:05.336 "dma_device_type": 1 00:21:05.336 }, 00:21:05.336 { 00:21:05.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.336 "dma_device_type": 2 00:21:05.336 }, 00:21:05.336 { 00:21:05.336 "dma_device_id": "system", 00:21:05.336 "dma_device_type": 1 00:21:05.336 }, 00:21:05.336 { 00:21:05.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.336 "dma_device_type": 2 00:21:05.336 } 00:21:05.336 ], 00:21:05.336 "driver_specific": { 00:21:05.336 "raid": { 00:21:05.336 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:05.336 "strip_size_kb": 0, 00:21:05.336 "state": "online", 00:21:05.336 "raid_level": "raid1", 00:21:05.336 "superblock": true, 00:21:05.336 "num_base_bdevs": 2, 00:21:05.336 "num_base_bdevs_discovered": 2, 00:21:05.336 "num_base_bdevs_operational": 2, 00:21:05.336 "base_bdevs_list": [ 00:21:05.336 { 00:21:05.336 "name": "pt1", 00:21:05.336 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:05.336 "is_configured": true, 00:21:05.336 "data_offset": 256, 00:21:05.336 "data_size": 7936 00:21:05.336 }, 00:21:05.336 { 00:21:05.336 "name": "pt2", 00:21:05.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.336 "is_configured": true, 00:21:05.336 "data_offset": 256, 00:21:05.336 "data_size": 7936 00:21:05.336 } 00:21:05.336 ] 00:21:05.336 } 00:21:05.336 } 00:21:05.336 }' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:05.336 pt2' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:05.336 16:28:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.336 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:05.595 [2024-10-08 16:28:58.669860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1e5d8e8c-f757-4c4f-a7db-306977414857 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 1e5d8e8c-f757-4c4f-a7db-306977414857 ']' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 [2024-10-08 16:28:58.713488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.595 [2024-10-08 16:28:58.713695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.595 [2024-10-08 16:28:58.713828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.595 [2024-10-08 16:28:58.713940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.595 [2024-10-08 16:28:58.713963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 16:28:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.595 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 [2024-10-08 16:28:58.845613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:05.595 [2024-10-08 16:28:58.848595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:05.595 [2024-10-08 16:28:58.848714] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:05.595 [2024-10-08 16:28:58.848821] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:05.595 [2024-10-08 16:28:58.848892] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.595 [2024-10-08 16:28:58.848929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:05.595 request: 00:21:05.595 { 00:21:05.595 "name": "raid_bdev1", 00:21:05.595 "raid_level": "raid1", 00:21:05.595 "base_bdevs": [ 00:21:05.595 "malloc1", 00:21:05.595 "malloc2" 00:21:05.595 ], 00:21:05.595 "superblock": false, 00:21:05.595 "method": "bdev_raid_create", 00:21:05.595 "req_id": 1 00:21:05.595 } 00:21:05.595 Got JSON-RPC error response 00:21:05.595 response: 00:21:05.596 { 00:21:05.596 "code": -17, 00:21:05.596 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:05.596 } 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 16:28:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.596 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 [2024-10-08 16:28:58.913799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:05.596 [2024-10-08 16:28:58.913887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.596 [2024-10-08 16:28:58.913957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:05.596 [2024-10-08 16:28:58.913986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.855 [2024-10-08 16:28:58.917141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.855 [2024-10-08 16:28:58.917346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:05.855 [2024-10-08 16:28:58.917442] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:05.855 [2024-10-08 16:28:58.917588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:05.855 pt1 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.855 16:28:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.855 
"name": "raid_bdev1", 00:21:05.855 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:05.855 "strip_size_kb": 0, 00:21:05.855 "state": "configuring", 00:21:05.855 "raid_level": "raid1", 00:21:05.855 "superblock": true, 00:21:05.855 "num_base_bdevs": 2, 00:21:05.855 "num_base_bdevs_discovered": 1, 00:21:05.855 "num_base_bdevs_operational": 2, 00:21:05.855 "base_bdevs_list": [ 00:21:05.855 { 00:21:05.855 "name": "pt1", 00:21:05.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:05.855 "is_configured": true, 00:21:05.855 "data_offset": 256, 00:21:05.855 "data_size": 7936 00:21:05.855 }, 00:21:05.855 { 00:21:05.855 "name": null, 00:21:05.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.855 "is_configured": false, 00:21:05.855 "data_offset": 256, 00:21:05.855 "data_size": 7936 00:21:05.855 } 00:21:05.855 ] 00:21:05.855 }' 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.855 16:28:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 [2024-10-08 16:28:59.454016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:06.422 [2024-10-08 16:28:59.454168] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.422 [2024-10-08 16:28:59.454215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.422 [2024-10-08 16:28:59.454244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.422 [2024-10-08 16:28:59.454713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.422 [2024-10-08 16:28:59.454765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:06.422 [2024-10-08 16:28:59.454870] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:06.422 [2024-10-08 16:28:59.454946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:06.422 [2024-10-08 16:28:59.455107] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:06.422 [2024-10-08 16:28:59.455130] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:06.422 [2024-10-08 16:28:59.455247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.422 [2024-10-08 16:28:59.455345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:06.422 [2024-10-08 16:28:59.455360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:06.422 [2024-10-08 16:28:59.455473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.422 pt2 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.422 16:28:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.422 "name": 
"raid_bdev1", 00:21:06.422 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:06.422 "strip_size_kb": 0, 00:21:06.422 "state": "online", 00:21:06.422 "raid_level": "raid1", 00:21:06.422 "superblock": true, 00:21:06.422 "num_base_bdevs": 2, 00:21:06.422 "num_base_bdevs_discovered": 2, 00:21:06.422 "num_base_bdevs_operational": 2, 00:21:06.422 "base_bdevs_list": [ 00:21:06.422 { 00:21:06.422 "name": "pt1", 00:21:06.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.422 "is_configured": true, 00:21:06.422 "data_offset": 256, 00:21:06.422 "data_size": 7936 00:21:06.422 }, 00:21:06.422 { 00:21:06.422 "name": "pt2", 00:21:06.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.422 "is_configured": true, 00:21:06.422 "data_offset": 256, 00:21:06.422 "data_size": 7936 00:21:06.422 } 00:21:06.422 ] 00:21:06.422 }' 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.422 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.681 16:28:59 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 16:28:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:06.681 [2024-10-08 16:28:59.998453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.994 "name": "raid_bdev1", 00:21:06.994 "aliases": [ 00:21:06.994 "1e5d8e8c-f757-4c4f-a7db-306977414857" 00:21:06.994 ], 00:21:06.994 "product_name": "Raid Volume", 00:21:06.994 "block_size": 4128, 00:21:06.994 "num_blocks": 7936, 00:21:06.994 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:06.994 "md_size": 32, 00:21:06.994 "md_interleave": true, 00:21:06.994 "dif_type": 0, 00:21:06.994 "assigned_rate_limits": { 00:21:06.994 "rw_ios_per_sec": 0, 00:21:06.994 "rw_mbytes_per_sec": 0, 00:21:06.994 "r_mbytes_per_sec": 0, 00:21:06.994 "w_mbytes_per_sec": 0 00:21:06.994 }, 00:21:06.994 "claimed": false, 00:21:06.994 "zoned": false, 00:21:06.994 "supported_io_types": { 00:21:06.994 "read": true, 00:21:06.994 "write": true, 00:21:06.994 "unmap": false, 00:21:06.994 "flush": false, 00:21:06.994 "reset": true, 00:21:06.994 "nvme_admin": false, 00:21:06.994 "nvme_io": false, 00:21:06.994 "nvme_io_md": false, 00:21:06.994 "write_zeroes": true, 00:21:06.994 "zcopy": false, 00:21:06.994 "get_zone_info": false, 00:21:06.994 "zone_management": false, 00:21:06.994 "zone_append": false, 00:21:06.994 "compare": false, 00:21:06.994 "compare_and_write": false, 00:21:06.994 "abort": false, 00:21:06.994 "seek_hole": false, 00:21:06.994 "seek_data": false, 00:21:06.994 "copy": false, 00:21:06.994 "nvme_iov_md": 
false 00:21:06.994 }, 00:21:06.994 "memory_domains": [ 00:21:06.994 { 00:21:06.994 "dma_device_id": "system", 00:21:06.994 "dma_device_type": 1 00:21:06.994 }, 00:21:06.994 { 00:21:06.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.994 "dma_device_type": 2 00:21:06.994 }, 00:21:06.994 { 00:21:06.994 "dma_device_id": "system", 00:21:06.994 "dma_device_type": 1 00:21:06.994 }, 00:21:06.994 { 00:21:06.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.994 "dma_device_type": 2 00:21:06.994 } 00:21:06.994 ], 00:21:06.994 "driver_specific": { 00:21:06.994 "raid": { 00:21:06.994 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:06.994 "strip_size_kb": 0, 00:21:06.994 "state": "online", 00:21:06.994 "raid_level": "raid1", 00:21:06.994 "superblock": true, 00:21:06.994 "num_base_bdevs": 2, 00:21:06.994 "num_base_bdevs_discovered": 2, 00:21:06.994 "num_base_bdevs_operational": 2, 00:21:06.994 "base_bdevs_list": [ 00:21:06.994 { 00:21:06.994 "name": "pt1", 00:21:06.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.994 "is_configured": true, 00:21:06.994 "data_offset": 256, 00:21:06.994 "data_size": 7936 00:21:06.994 }, 00:21:06.994 { 00:21:06.994 "name": "pt2", 00:21:06.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.994 "is_configured": true, 00:21:06.994 "data_offset": 256, 00:21:06.994 "data_size": 7936 00:21:06.994 } 00:21:06.994 ] 00:21:06.994 } 00:21:06.994 } 00:21:06.994 }' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:06.994 pt2' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.994 [2024-10-08 16:29:00.270486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.994 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 1e5d8e8c-f757-4c4f-a7db-306977414857 '!=' 1e5d8e8c-f757-4c4f-a7db-306977414857 ']' 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.253 [2024-10-08 16:29:00.322243] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:07.253 "name": "raid_bdev1", 00:21:07.253 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:07.253 "strip_size_kb": 0, 00:21:07.253 "state": "online", 00:21:07.253 "raid_level": "raid1", 00:21:07.253 "superblock": true, 00:21:07.253 "num_base_bdevs": 2, 00:21:07.253 "num_base_bdevs_discovered": 1, 00:21:07.253 "num_base_bdevs_operational": 1, 00:21:07.253 "base_bdevs_list": [ 00:21:07.253 { 00:21:07.253 "name": null, 00:21:07.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.253 "is_configured": false, 00:21:07.253 "data_offset": 0, 00:21:07.253 "data_size": 7936 00:21:07.253 }, 00:21:07.253 { 00:21:07.253 "name": "pt2", 00:21:07.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.253 "is_configured": true, 00:21:07.253 "data_offset": 256, 00:21:07.253 "data_size": 7936 00:21:07.253 } 00:21:07.253 ] 00:21:07.253 }' 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.253 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.820 [2024-10-08 16:29:00.866416] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.820 [2024-10-08 16:29:00.866451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.820 [2024-10-08 16:29:00.866629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.820 [2024-10-08 16:29:00.866711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:07.820 [2024-10-08 16:29:00.866738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.820 [2024-10-08 16:29:00.938423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:07.820 [2024-10-08 16:29:00.938561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.820 [2024-10-08 16:29:00.938604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:07.820 [2024-10-08 16:29:00.938632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.820 [2024-10-08 16:29:00.941947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.820 [2024-10-08 16:29:00.942000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:07.820 [2024-10-08 16:29:00.942148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:07.820 [2024-10-08 16:29:00.942272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:07.820 [2024-10-08 16:29:00.942458] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:07.820 [2024-10-08 16:29:00.942490] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:21:07.820 [2024-10-08 16:29:00.942696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:07.820 [2024-10-08 16:29:00.942858] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:07.820 [2024-10-08 16:29:00.942907] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:07.820 [2024-10-08 16:29:00.943117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.820 pt2 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.820 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.821 16:29:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.821 "name": "raid_bdev1", 00:21:07.821 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:07.821 "strip_size_kb": 0, 00:21:07.821 "state": "online", 00:21:07.821 "raid_level": "raid1", 00:21:07.821 "superblock": true, 00:21:07.821 "num_base_bdevs": 2, 00:21:07.821 "num_base_bdevs_discovered": 1, 00:21:07.821 "num_base_bdevs_operational": 1, 00:21:07.821 "base_bdevs_list": [ 00:21:07.821 { 00:21:07.821 "name": null, 00:21:07.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.821 "is_configured": false, 00:21:07.821 "data_offset": 256, 00:21:07.821 "data_size": 7936 00:21:07.821 }, 00:21:07.821 { 00:21:07.821 "name": "pt2", 00:21:07.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.821 "is_configured": true, 00:21:07.821 "data_offset": 256, 00:21:07.821 "data_size": 7936 00:21:07.821 } 00:21:07.821 ] 00:21:07.821 }' 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.821 16:29:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.387 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:08.387 16:29:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.387 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.388 [2024-10-08 16:29:01.478768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.388 [2024-10-08 16:29:01.478816] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.388 [2024-10-08 16:29:01.479042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.388 [2024-10-08 16:29:01.479162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.388 [2024-10-08 16:29:01.479199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.388 [2024-10-08 16:29:01.542787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.388 [2024-10-08 16:29:01.542904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.388 [2024-10-08 16:29:01.542948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:08.388 [2024-10-08 16:29:01.542970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.388 [2024-10-08 16:29:01.546195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.388 [2024-10-08 16:29:01.546238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.388 [2024-10-08 16:29:01.546376] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.388 [2024-10-08 16:29:01.546485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.388 [2024-10-08 16:29:01.546737] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:08.388 [2024-10-08 16:29:01.546764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.388 [2024-10-08 16:29:01.546808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:08.388 [2024-10-08 16:29:01.546950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.388 [2024-10-08 16:29:01.547169] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:08.388 [2024-10-08 16:29:01.547197] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:08.388 [2024-10-08 16:29:01.547327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:08.388 [2024-10-08 16:29:01.547487] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:08.388 [2024-10-08 16:29:01.547558] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:08.388 [2024-10-08 16:29:01.547751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.388 pt1 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.388 16:29:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.388 "name": "raid_bdev1", 00:21:08.388 "uuid": "1e5d8e8c-f757-4c4f-a7db-306977414857", 00:21:08.388 "strip_size_kb": 0, 00:21:08.388 "state": "online", 00:21:08.388 "raid_level": "raid1", 00:21:08.388 "superblock": true, 00:21:08.388 "num_base_bdevs": 2, 00:21:08.388 "num_base_bdevs_discovered": 1, 00:21:08.388 "num_base_bdevs_operational": 1, 00:21:08.388 "base_bdevs_list": [ 00:21:08.388 { 00:21:08.388 "name": null, 00:21:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.388 "is_configured": false, 00:21:08.388 "data_offset": 256, 00:21:08.388 "data_size": 7936 00:21:08.388 }, 00:21:08.388 { 00:21:08.388 "name": "pt2", 00:21:08.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.388 "is_configured": true, 00:21:08.388 "data_offset": 256, 00:21:08.388 "data_size": 7936 00:21:08.388 } 00:21:08.388 ] 00:21:08.388 }' 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.388 16:29:01 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.955 [2024-10-08 16:29:02.135356] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 1e5d8e8c-f757-4c4f-a7db-306977414857 '!=' 1e5d8e8c-f757-4c4f-a7db-306977414857 ']' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89557 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89557 ']' 00:21:08.955 16:29:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89557 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89557 00:21:08.955 killing process with pid 89557 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89557' 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89557 00:21:08.955 [2024-10-08 16:29:02.218638] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:08.955 16:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89557 00:21:08.955 [2024-10-08 16:29:02.218779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.955 [2024-10-08 16:29:02.218856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.955 [2024-10-08 16:29:02.218882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:09.214 [2024-10-08 16:29:02.411583] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.590 16:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:10.590 00:21:10.590 real 0m7.073s 00:21:10.590 user 0m10.920s 00:21:10.590 sys 0m1.125s 
00:21:10.590 16:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.590 ************************************ 00:21:10.590 END TEST raid_superblock_test_md_interleaved 00:21:10.590 ************************************ 00:21:10.590 16:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.590 16:29:03 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:10.590 16:29:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:10.590 16:29:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.590 16:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.590 ************************************ 00:21:10.590 START TEST raid_rebuild_test_sb_md_interleaved 00:21:10.590 ************************************ 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.590 16:29:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:10.590 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:10.591 
16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89891 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89891 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89891 ']' 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.591 16:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.591 [2024-10-08 16:29:03.895053] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:21:10.591 [2024-10-08 16:29:03.895233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.591 Zero copy mechanism will not be used. 
00:21:10.591 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89891 ] 00:21:10.850 [2024-10-08 16:29:04.064150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.109 [2024-10-08 16:29:04.336778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.368 [2024-10-08 16:29:04.565836] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.368 [2024-10-08 16:29:04.565905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.626 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.627 BaseBdev1_malloc 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.627 [2024-10-08 16:29:04.891001] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.627 [2024-10-08 16:29:04.891106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.627 [2024-10-08 16:29:04.891140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:11.627 [2024-10-08 16:29:04.891159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.627 [2024-10-08 16:29:04.894069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.627 [2024-10-08 16:29:04.894132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.627 BaseBdev1 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.627 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.885 BaseBdev2_malloc 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.885 [2024-10-08 16:29:04.963080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:21:11.885 [2024-10-08 16:29:04.963152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.885 [2024-10-08 16:29:04.963178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:11.885 [2024-10-08 16:29:04.963194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.885 [2024-10-08 16:29:04.965991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.885 [2024-10-08 16:29:04.966037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:11.885 BaseBdev2 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.885 16:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.885 spare_malloc 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.885 spare_delay 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.885 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.885 [2024-10-08 16:29:05.032082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:11.886 [2024-10-08 16:29:05.032185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.886 [2024-10-08 16:29:05.032214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:11.886 [2024-10-08 16:29:05.032231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.886 [2024-10-08 16:29:05.035184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.886 [2024-10-08 16:29:05.035230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:11.886 spare 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.886 [2024-10-08 16:29:05.044219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.886 [2024-10-08 16:29:05.047041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.886 [2024-10-08 16:29:05.047497] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:11.886 [2024-10-08 16:29:05.047541] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:11.886 [2024-10-08 16:29:05.047651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:11.886 [2024-10-08 16:29:05.047773] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:11.886 [2024-10-08 16:29:05.047787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:11.886 [2024-10-08 16:29:05.047883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.886 16:29:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.886 "name": "raid_bdev1", 00:21:11.886 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:11.886 "strip_size_kb": 0, 00:21:11.886 "state": "online", 00:21:11.886 "raid_level": "raid1", 00:21:11.886 "superblock": true, 00:21:11.886 "num_base_bdevs": 2, 00:21:11.886 "num_base_bdevs_discovered": 2, 00:21:11.886 "num_base_bdevs_operational": 2, 00:21:11.886 "base_bdevs_list": [ 00:21:11.886 { 00:21:11.886 "name": "BaseBdev1", 00:21:11.886 "uuid": "060d2af0-da47-541e-9823-bf69c50b1757", 00:21:11.886 "is_configured": true, 00:21:11.886 "data_offset": 256, 00:21:11.886 "data_size": 7936 00:21:11.886 }, 00:21:11.886 { 00:21:11.886 "name": "BaseBdev2", 00:21:11.886 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:11.886 "is_configured": true, 00:21:11.886 "data_offset": 256, 00:21:11.886 "data_size": 7936 00:21:11.886 } 00:21:11.886 ] 00:21:11.886 }' 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.886 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:12.453 16:29:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 [2024-10-08 16:29:05.576931] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.453 [2024-10-08 16:29:05.676461] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.453 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.453 "name": "raid_bdev1", 00:21:12.453 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:12.453 "strip_size_kb": 0, 00:21:12.453 "state": "online", 00:21:12.453 "raid_level": "raid1", 00:21:12.453 "superblock": true, 00:21:12.453 "num_base_bdevs": 2, 00:21:12.453 "num_base_bdevs_discovered": 1, 00:21:12.453 "num_base_bdevs_operational": 1, 00:21:12.453 "base_bdevs_list": [ 00:21:12.453 { 00:21:12.453 "name": null, 00:21:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.453 "is_configured": false, 00:21:12.453 "data_offset": 0, 00:21:12.454 "data_size": 7936 00:21:12.454 }, 00:21:12.454 { 00:21:12.454 "name": "BaseBdev2", 00:21:12.454 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:12.454 "is_configured": true, 00:21:12.454 "data_offset": 256, 00:21:12.454 "data_size": 7936 00:21:12.454 } 00:21:12.454 ] 00:21:12.454 }' 00:21:12.454 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.454 16:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.068 16:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.068 16:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.068 16:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.068 [2024-10-08 16:29:06.208724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.068 [2024-10-08 16:29:06.226720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:21:13.068 16:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.068 16:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:13.068 [2024-10-08 16:29:06.229909] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.004 "name": "raid_bdev1", 00:21:14.004 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:14.004 "strip_size_kb": 0, 00:21:14.004 "state": "online", 00:21:14.004 "raid_level": "raid1", 00:21:14.004 "superblock": true, 00:21:14.004 
"num_base_bdevs": 2, 00:21:14.004 "num_base_bdevs_discovered": 2, 00:21:14.004 "num_base_bdevs_operational": 2, 00:21:14.004 "process": { 00:21:14.004 "type": "rebuild", 00:21:14.004 "target": "spare", 00:21:14.004 "progress": { 00:21:14.004 "blocks": 2560, 00:21:14.004 "percent": 32 00:21:14.004 } 00:21:14.004 }, 00:21:14.004 "base_bdevs_list": [ 00:21:14.004 { 00:21:14.004 "name": "spare", 00:21:14.004 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:14.004 "is_configured": true, 00:21:14.004 "data_offset": 256, 00:21:14.004 "data_size": 7936 00:21:14.004 }, 00:21:14.004 { 00:21:14.004 "name": "BaseBdev2", 00:21:14.004 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:14.004 "is_configured": true, 00:21:14.004 "data_offset": 256, 00:21:14.004 "data_size": 7936 00:21:14.004 } 00:21:14.004 ] 00:21:14.004 }' 00:21:14.004 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.263 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.263 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.264 [2024-10-08 16:29:07.403986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:14.264 [2024-10-08 16:29:07.441886] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:14.264 
[2024-10-08 16:29:07.442176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.264 [2024-10-08 16:29:07.442366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:14.264 [2024-10-08 16:29:07.442503] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.264 "name": "raid_bdev1", 00:21:14.264 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:14.264 "strip_size_kb": 0, 00:21:14.264 "state": "online", 00:21:14.264 "raid_level": "raid1", 00:21:14.264 "superblock": true, 00:21:14.264 "num_base_bdevs": 2, 00:21:14.264 "num_base_bdevs_discovered": 1, 00:21:14.264 "num_base_bdevs_operational": 1, 00:21:14.264 "base_bdevs_list": [ 00:21:14.264 { 00:21:14.264 "name": null, 00:21:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.264 "is_configured": false, 00:21:14.264 "data_offset": 0, 00:21:14.264 "data_size": 7936 00:21:14.264 }, 00:21:14.264 { 00:21:14.264 "name": "BaseBdev2", 00:21:14.264 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:14.264 "is_configured": true, 00:21:14.264 "data_offset": 256, 00:21:14.264 "data_size": 7936 00:21:14.264 } 00:21:14.264 ] 00:21:14.264 }' 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.264 16:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.830 16:29:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.830 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.830 "name": "raid_bdev1", 00:21:14.830 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:14.830 "strip_size_kb": 0, 00:21:14.830 "state": "online", 00:21:14.830 "raid_level": "raid1", 00:21:14.830 "superblock": true, 00:21:14.830 "num_base_bdevs": 2, 00:21:14.830 "num_base_bdevs_discovered": 1, 00:21:14.830 "num_base_bdevs_operational": 1, 00:21:14.830 "base_bdevs_list": [ 00:21:14.830 { 00:21:14.830 "name": null, 00:21:14.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.830 "is_configured": false, 00:21:14.830 "data_offset": 0, 00:21:14.830 "data_size": 7936 00:21:14.830 }, 00:21:14.830 { 00:21:14.830 "name": "BaseBdev2", 00:21:14.830 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:14.830 "is_configured": true, 00:21:14.830 "data_offset": 256, 00:21:14.830 "data_size": 7936 00:21:14.830 } 00:21:14.830 ] 00:21:14.830 }' 00:21:14.831 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.831 16:29:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.831 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.090 [2024-10-08 16:29:08.169869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.090 [2024-10-08 16:29:08.186684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.090 16:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:15.090 [2024-10-08 16:29:08.189459] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.027 
16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.027 "name": "raid_bdev1", 00:21:16.027 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:16.027 "strip_size_kb": 0, 00:21:16.027 "state": "online", 00:21:16.027 "raid_level": "raid1", 00:21:16.027 "superblock": true, 00:21:16.027 "num_base_bdevs": 2, 00:21:16.027 "num_base_bdevs_discovered": 2, 00:21:16.027 "num_base_bdevs_operational": 2, 00:21:16.027 "process": { 00:21:16.027 "type": "rebuild", 00:21:16.027 "target": "spare", 00:21:16.027 "progress": { 00:21:16.027 "blocks": 2560, 00:21:16.027 "percent": 32 00:21:16.027 } 00:21:16.027 }, 00:21:16.027 "base_bdevs_list": [ 00:21:16.027 { 00:21:16.027 "name": "spare", 00:21:16.027 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:16.027 "is_configured": true, 00:21:16.027 "data_offset": 256, 00:21:16.027 "data_size": 7936 00:21:16.027 }, 00:21:16.027 { 00:21:16.027 "name": "BaseBdev2", 00:21:16.027 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:16.027 "is_configured": true, 00:21:16.027 "data_offset": 256, 00:21:16.027 "data_size": 7936 00:21:16.027 } 00:21:16.027 ] 00:21:16.027 }' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:16.027 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=819 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.027 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.287 "name": "raid_bdev1", 00:21:16.287 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:16.287 "strip_size_kb": 0, 00:21:16.287 "state": "online", 00:21:16.287 "raid_level": "raid1", 00:21:16.287 "superblock": true, 00:21:16.287 "num_base_bdevs": 2, 00:21:16.287 "num_base_bdevs_discovered": 2, 00:21:16.287 "num_base_bdevs_operational": 2, 00:21:16.287 "process": { 00:21:16.287 "type": "rebuild", 00:21:16.287 "target": "spare", 00:21:16.287 "progress": { 00:21:16.287 "blocks": 2816, 00:21:16.287 "percent": 35 00:21:16.287 } 00:21:16.287 }, 00:21:16.287 "base_bdevs_list": [ 00:21:16.287 { 00:21:16.287 "name": "spare", 00:21:16.287 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:16.287 "is_configured": true, 00:21:16.287 "data_offset": 256, 00:21:16.287 "data_size": 7936 00:21:16.287 }, 00:21:16.287 { 00:21:16.287 "name": "BaseBdev2", 00:21:16.287 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:16.287 "is_configured": true, 00:21:16.287 "data_offset": 256, 00:21:16.287 "data_size": 7936 00:21:16.287 } 00:21:16.287 ] 00:21:16.287 }' 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.287 16:29:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.287 16:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.225 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.484 "name": "raid_bdev1", 00:21:17.484 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:17.484 "strip_size_kb": 0, 00:21:17.484 "state": 
"online", 00:21:17.484 "raid_level": "raid1", 00:21:17.484 "superblock": true, 00:21:17.484 "num_base_bdevs": 2, 00:21:17.484 "num_base_bdevs_discovered": 2, 00:21:17.484 "num_base_bdevs_operational": 2, 00:21:17.484 "process": { 00:21:17.484 "type": "rebuild", 00:21:17.484 "target": "spare", 00:21:17.484 "progress": { 00:21:17.484 "blocks": 5632, 00:21:17.484 "percent": 70 00:21:17.484 } 00:21:17.484 }, 00:21:17.484 "base_bdevs_list": [ 00:21:17.484 { 00:21:17.484 "name": "spare", 00:21:17.484 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:17.484 "is_configured": true, 00:21:17.484 "data_offset": 256, 00:21:17.484 "data_size": 7936 00:21:17.484 }, 00:21:17.484 { 00:21:17.484 "name": "BaseBdev2", 00:21:17.484 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:17.484 "is_configured": true, 00:21:17.484 "data_offset": 256, 00:21:17.484 "data_size": 7936 00:21:17.484 } 00:21:17.484 ] 00:21:17.484 }' 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.484 16:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.051 [2024-10-08 16:29:11.320568] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:18.051 [2024-10-08 16:29:11.320929] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:18.051 [2024-10-08 16:29:11.321115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.354 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.612 "name": "raid_bdev1", 00:21:18.612 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:18.612 "strip_size_kb": 0, 00:21:18.612 "state": "online", 00:21:18.612 "raid_level": "raid1", 00:21:18.612 "superblock": true, 00:21:18.612 "num_base_bdevs": 2, 00:21:18.612 "num_base_bdevs_discovered": 2, 00:21:18.612 "num_base_bdevs_operational": 2, 00:21:18.612 "base_bdevs_list": [ 00:21:18.612 { 00:21:18.612 "name": "spare", 00:21:18.612 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:18.612 "is_configured": true, 00:21:18.612 "data_offset": 256, 
00:21:18.612 "data_size": 7936 00:21:18.612 }, 00:21:18.612 { 00:21:18.612 "name": "BaseBdev2", 00:21:18.612 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:18.612 "is_configured": true, 00:21:18.612 "data_offset": 256, 00:21:18.612 "data_size": 7936 00:21:18.612 } 00:21:18.612 ] 00:21:18.612 }' 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.612 16:29:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.612 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.612 "name": "raid_bdev1", 00:21:18.612 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:18.612 "strip_size_kb": 0, 00:21:18.612 "state": "online", 00:21:18.612 "raid_level": "raid1", 00:21:18.612 "superblock": true, 00:21:18.612 "num_base_bdevs": 2, 00:21:18.612 "num_base_bdevs_discovered": 2, 00:21:18.612 "num_base_bdevs_operational": 2, 00:21:18.613 "base_bdevs_list": [ 00:21:18.613 { 00:21:18.613 "name": "spare", 00:21:18.613 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:18.613 "is_configured": true, 00:21:18.613 "data_offset": 256, 00:21:18.613 "data_size": 7936 00:21:18.613 }, 00:21:18.613 { 00:21:18.613 "name": "BaseBdev2", 00:21:18.613 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:18.613 "is_configured": true, 00:21:18.613 "data_offset": 256, 00:21:18.613 "data_size": 7936 00:21:18.613 } 00:21:18.613 ] 00:21:18.613 }' 00:21:18.613 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.613 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:18.613 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.871 16:29:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.871 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.872 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.872 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.872 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.872 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.872 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.872 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.872 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.872 "name": "raid_bdev1", 00:21:18.872 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:18.872 "strip_size_kb": 0, 00:21:18.872 "state": "online", 00:21:18.872 "raid_level": "raid1", 00:21:18.872 "superblock": true, 00:21:18.872 "num_base_bdevs": 2, 00:21:18.872 "num_base_bdevs_discovered": 2, 
00:21:18.872 "num_base_bdevs_operational": 2, 00:21:18.872 "base_bdevs_list": [ 00:21:18.872 { 00:21:18.872 "name": "spare", 00:21:18.872 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:18.872 "is_configured": true, 00:21:18.872 "data_offset": 256, 00:21:18.872 "data_size": 7936 00:21:18.872 }, 00:21:18.872 { 00:21:18.872 "name": "BaseBdev2", 00:21:18.872 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:18.872 "is_configured": true, 00:21:18.872 "data_offset": 256, 00:21:18.872 "data_size": 7936 00:21:18.872 } 00:21:18.872 ] 00:21:18.872 }' 00:21:18.872 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.872 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 [2024-10-08 16:29:12.502807] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.439 [2024-10-08 16:29:12.502853] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.439 [2024-10-08 16:29:12.502976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.439 [2024-10-08 16:29:12.503078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.439 [2024-10-08 16:29:12.503096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 [2024-10-08 16:29:12.574777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:19.439 [2024-10-08 16:29:12.574849] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:21:19.439 [2024-10-08 16:29:12.574882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:19.439 [2024-10-08 16:29:12.574897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.439 [2024-10-08 16:29:12.577887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.439 [2024-10-08 16:29:12.577952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:19.439 [2024-10-08 16:29:12.578083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:19.439 [2024-10-08 16:29:12.578155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.439 [2024-10-08 16:29:12.578325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:19.439 spare 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.439 [2024-10-08 16:29:12.678488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:19.439 [2024-10-08 16:29:12.678582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:19.439 [2024-10-08 16:29:12.678802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:19.439 [2024-10-08 16:29:12.678945] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:19.439 [2024-10-08 16:29:12.678970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:19.439 [2024-10-08 16:29:12.679143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.439 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.440 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.440 "name": "raid_bdev1", 00:21:19.440 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:19.440 "strip_size_kb": 0, 00:21:19.440 "state": "online", 00:21:19.440 "raid_level": "raid1", 00:21:19.440 "superblock": true, 00:21:19.440 "num_base_bdevs": 2, 00:21:19.440 "num_base_bdevs_discovered": 2, 00:21:19.440 "num_base_bdevs_operational": 2, 00:21:19.440 "base_bdevs_list": [ 00:21:19.440 { 00:21:19.440 "name": "spare", 00:21:19.440 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:19.440 "is_configured": true, 00:21:19.440 "data_offset": 256, 00:21:19.440 "data_size": 7936 00:21:19.440 }, 00:21:19.440 { 00:21:19.440 "name": "BaseBdev2", 00:21:19.440 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:19.440 "is_configured": true, 00:21:19.440 "data_offset": 256, 00:21:19.440 "data_size": 7936 00:21:19.440 } 00:21:19.440 ] 00:21:19.440 }' 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.440 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.007 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.008 "name": "raid_bdev1", 00:21:20.008 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:20.008 "strip_size_kb": 0, 00:21:20.008 "state": "online", 00:21:20.008 "raid_level": "raid1", 00:21:20.008 "superblock": true, 00:21:20.008 "num_base_bdevs": 2, 00:21:20.008 "num_base_bdevs_discovered": 2, 00:21:20.008 "num_base_bdevs_operational": 2, 00:21:20.008 "base_bdevs_list": [ 00:21:20.008 { 00:21:20.008 "name": "spare", 00:21:20.008 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:20.008 "is_configured": true, 00:21:20.008 "data_offset": 256, 00:21:20.008 "data_size": 7936 00:21:20.008 }, 00:21:20.008 { 00:21:20.008 "name": "BaseBdev2", 00:21:20.008 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:20.008 "is_configured": true, 00:21:20.008 "data_offset": 256, 00:21:20.008 "data_size": 7936 00:21:20.008 } 00:21:20.008 ] 00:21:20.008 }' 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:20.008 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.266 [2024-10-08 16:29:13.419436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.266 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.267 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.267 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.267 "name": "raid_bdev1", 00:21:20.267 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:20.267 "strip_size_kb": 0, 00:21:20.267 "state": "online", 00:21:20.267 "raid_level": "raid1", 00:21:20.267 "superblock": true, 00:21:20.267 "num_base_bdevs": 2, 00:21:20.267 "num_base_bdevs_discovered": 1, 00:21:20.267 "num_base_bdevs_operational": 1, 00:21:20.267 "base_bdevs_list": [ 00:21:20.267 { 00:21:20.267 "name": null, 00:21:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.267 
"is_configured": false, 00:21:20.267 "data_offset": 0, 00:21:20.267 "data_size": 7936 00:21:20.267 }, 00:21:20.267 { 00:21:20.267 "name": "BaseBdev2", 00:21:20.267 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:20.267 "is_configured": true, 00:21:20.267 "data_offset": 256, 00:21:20.267 "data_size": 7936 00:21:20.267 } 00:21:20.267 ] 00:21:20.267 }' 00:21:20.267 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.267 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.834 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.834 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.834 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.834 [2024-10-08 16:29:13.935702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.834 [2024-10-08 16:29:13.936048] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.834 [2024-10-08 16:29:13.936093] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:20.834 [2024-10-08 16:29:13.936139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.834 [2024-10-08 16:29:13.953229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:20.834 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.834 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:20.834 [2024-10-08 16:29:13.956401] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.770 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.771 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.771 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:21.771 "name": "raid_bdev1", 00:21:21.771 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:21.771 "strip_size_kb": 0, 00:21:21.771 "state": "online", 00:21:21.771 "raid_level": "raid1", 00:21:21.771 "superblock": true, 00:21:21.771 "num_base_bdevs": 2, 00:21:21.771 "num_base_bdevs_discovered": 2, 00:21:21.771 "num_base_bdevs_operational": 2, 00:21:21.771 "process": { 00:21:21.771 "type": "rebuild", 00:21:21.771 "target": "spare", 00:21:21.771 "progress": { 00:21:21.771 "blocks": 2560, 00:21:21.771 "percent": 32 00:21:21.771 } 00:21:21.771 }, 00:21:21.771 "base_bdevs_list": [ 00:21:21.771 { 00:21:21.771 "name": "spare", 00:21:21.771 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:21.771 "is_configured": true, 00:21:21.771 "data_offset": 256, 00:21:21.771 "data_size": 7936 00:21:21.771 }, 00:21:21.771 { 00:21:21.771 "name": "BaseBdev2", 00:21:21.771 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:21.771 "is_configured": true, 00:21:21.771 "data_offset": 256, 00:21:21.771 "data_size": 7936 00:21:21.771 } 00:21:21.771 ] 00:21:21.771 }' 00:21:21.771 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.771 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.771 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.029 [2024-10-08 16:29:15.127070] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.029 [2024-10-08 16:29:15.168717] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:22.029 [2024-10-08 16:29:15.168801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.029 [2024-10-08 16:29:15.168824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.029 [2024-10-08 16:29:15.168841] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.029 16:29:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.029 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.030 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.030 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.030 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.030 "name": "raid_bdev1", 00:21:22.030 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:22.030 "strip_size_kb": 0, 00:21:22.030 "state": "online", 00:21:22.030 "raid_level": "raid1", 00:21:22.030 "superblock": true, 00:21:22.030 "num_base_bdevs": 2, 00:21:22.030 "num_base_bdevs_discovered": 1, 00:21:22.030 "num_base_bdevs_operational": 1, 00:21:22.030 "base_bdevs_list": [ 00:21:22.030 { 00:21:22.030 "name": null, 00:21:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.030 "is_configured": false, 00:21:22.030 "data_offset": 0, 00:21:22.030 "data_size": 7936 00:21:22.030 }, 00:21:22.030 { 00:21:22.030 "name": "BaseBdev2", 00:21:22.030 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:22.030 "is_configured": true, 00:21:22.030 "data_offset": 256, 00:21:22.030 "data_size": 7936 00:21:22.030 } 00:21:22.030 ] 00:21:22.030 }' 00:21:22.030 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.030 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.597 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:22.597 16:29:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.597 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.597 [2024-10-08 16:29:15.729025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.597 [2024-10-08 16:29:15.729148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.597 [2024-10-08 16:29:15.729180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:22.597 [2024-10-08 16:29:15.729198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.597 [2024-10-08 16:29:15.729560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.597 [2024-10-08 16:29:15.729617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.597 [2024-10-08 16:29:15.729704] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:22.597 [2024-10-08 16:29:15.729728] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:22.597 [2024-10-08 16:29:15.729750] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:22.597 [2024-10-08 16:29:15.729783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.597 [2024-10-08 16:29:15.745923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:22.597 spare 00:21:22.597 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.597 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:22.597 [2024-10-08 16:29:15.748957] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:23.532 "name": "raid_bdev1", 00:21:23.532 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:23.532 "strip_size_kb": 0, 00:21:23.532 "state": "online", 00:21:23.532 "raid_level": "raid1", 00:21:23.532 "superblock": true, 00:21:23.532 "num_base_bdevs": 2, 00:21:23.532 "num_base_bdevs_discovered": 2, 00:21:23.532 "num_base_bdevs_operational": 2, 00:21:23.532 "process": { 00:21:23.532 "type": "rebuild", 00:21:23.532 "target": "spare", 00:21:23.532 "progress": { 00:21:23.532 "blocks": 2560, 00:21:23.532 "percent": 32 00:21:23.532 } 00:21:23.532 }, 00:21:23.532 "base_bdevs_list": [ 00:21:23.532 { 00:21:23.532 "name": "spare", 00:21:23.532 "uuid": "9ed33789-0d6d-5295-aa68-25aa92febcbb", 00:21:23.532 "is_configured": true, 00:21:23.532 "data_offset": 256, 00:21:23.532 "data_size": 7936 00:21:23.532 }, 00:21:23.532 { 00:21:23.532 "name": "BaseBdev2", 00:21:23.532 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:23.532 "is_configured": true, 00:21:23.532 "data_offset": 256, 00:21:23.532 "data_size": 7936 00:21:23.532 } 00:21:23.532 ] 00:21:23.532 }' 00:21:23.532 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.791 [2024-10-08 
16:29:16.930661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.791 [2024-10-08 16:29:16.960242] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.791 [2024-10-08 16:29:16.960320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.791 [2024-10-08 16:29:16.960349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.791 [2024-10-08 16:29:16.960381] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.791 16:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.791 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.792 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.792 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.792 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.792 "name": "raid_bdev1", 00:21:23.792 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:23.792 "strip_size_kb": 0, 00:21:23.792 "state": "online", 00:21:23.792 "raid_level": "raid1", 00:21:23.792 "superblock": true, 00:21:23.792 "num_base_bdevs": 2, 00:21:23.792 "num_base_bdevs_discovered": 1, 00:21:23.792 "num_base_bdevs_operational": 1, 00:21:23.792 "base_bdevs_list": [ 00:21:23.792 { 00:21:23.792 "name": null, 00:21:23.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.792 "is_configured": false, 00:21:23.792 "data_offset": 0, 00:21:23.792 "data_size": 7936 00:21:23.792 }, 00:21:23.792 { 00:21:23.792 "name": "BaseBdev2", 00:21:23.792 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:23.792 "is_configured": true, 00:21:23.792 "data_offset": 256, 00:21:23.792 "data_size": 7936 00:21:23.792 } 00:21:23.792 ] 00:21:23.792 }' 00:21:23.792 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.792 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.359 16:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.359 "name": "raid_bdev1", 00:21:24.359 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:24.359 "strip_size_kb": 0, 00:21:24.359 "state": "online", 00:21:24.359 "raid_level": "raid1", 00:21:24.359 "superblock": true, 00:21:24.359 "num_base_bdevs": 2, 00:21:24.359 "num_base_bdevs_discovered": 1, 00:21:24.359 "num_base_bdevs_operational": 1, 00:21:24.359 "base_bdevs_list": [ 00:21:24.359 { 00:21:24.359 "name": null, 00:21:24.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.359 "is_configured": false, 00:21:24.359 "data_offset": 0, 00:21:24.359 "data_size": 7936 00:21:24.359 }, 00:21:24.359 { 00:21:24.359 "name": "BaseBdev2", 00:21:24.359 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:24.359 "is_configured": true, 00:21:24.359 "data_offset": 256, 
00:21:24.359 "data_size": 7936 00:21:24.359 } 00:21:24.359 ] 00:21:24.359 }' 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:24.359 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.617 [2024-10-08 16:29:17.713093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:24.617 [2024-10-08 16:29:17.713196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.617 [2024-10-08 16:29:17.713249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:24.617 [2024-10-08 16:29:17.713264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.617 [2024-10-08 16:29:17.713520] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.617 [2024-10-08 16:29:17.713543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.617 [2024-10-08 16:29:17.713633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:24.617 [2024-10-08 16:29:17.713656] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:24.617 [2024-10-08 16:29:17.713671] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:24.617 [2024-10-08 16:29:17.713686] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:24.617 BaseBdev1 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.617 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.564 16:29:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.564 "name": "raid_bdev1", 00:21:25.564 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:25.564 "strip_size_kb": 0, 00:21:25.564 "state": "online", 00:21:25.564 "raid_level": "raid1", 00:21:25.564 "superblock": true, 00:21:25.564 "num_base_bdevs": 2, 00:21:25.564 "num_base_bdevs_discovered": 1, 00:21:25.564 "num_base_bdevs_operational": 1, 00:21:25.564 "base_bdevs_list": [ 00:21:25.564 { 00:21:25.564 "name": null, 00:21:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.564 "is_configured": false, 00:21:25.564 "data_offset": 0, 00:21:25.564 "data_size": 7936 00:21:25.564 }, 00:21:25.564 { 00:21:25.564 "name": "BaseBdev2", 00:21:25.564 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:25.564 "is_configured": true, 00:21:25.564 "data_offset": 256, 00:21:25.564 "data_size": 7936 00:21:25.564 } 00:21:25.564 ] 00:21:25.564 }' 00:21:25.564 16:29:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.564 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.131 "name": "raid_bdev1", 00:21:26.131 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:26.131 "strip_size_kb": 0, 00:21:26.131 "state": "online", 00:21:26.131 "raid_level": "raid1", 00:21:26.131 "superblock": true, 00:21:26.131 "num_base_bdevs": 2, 00:21:26.131 "num_base_bdevs_discovered": 1, 00:21:26.131 "num_base_bdevs_operational": 1, 00:21:26.131 "base_bdevs_list": [ 00:21:26.131 { 00:21:26.131 "name": 
null, 00:21:26.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.131 "is_configured": false, 00:21:26.131 "data_offset": 0, 00:21:26.131 "data_size": 7936 00:21:26.131 }, 00:21:26.131 { 00:21:26.131 "name": "BaseBdev2", 00:21:26.131 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:26.131 "is_configured": true, 00:21:26.131 "data_offset": 256, 00:21:26.131 "data_size": 7936 00:21:26.131 } 00:21:26.131 ] 00:21:26.131 }' 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 [2024-10-08 16:29:19.421891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.131 [2024-10-08 16:29:19.422263] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:26.131 [2024-10-08 16:29:19.422303] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:26.131 request: 00:21:26.131 { 00:21:26.131 "base_bdev": "BaseBdev1", 00:21:26.131 "raid_bdev": "raid_bdev1", 00:21:26.131 "method": "bdev_raid_add_base_bdev", 00:21:26.131 "req_id": 1 00:21:26.131 } 00:21:26.131 Got JSON-RPC error response 00:21:26.131 response: 00:21:26.131 { 00:21:26.131 "code": -22, 00:21:26.131 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:26.131 } 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.131 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.509 "name": "raid_bdev1", 00:21:27.509 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:27.509 "strip_size_kb": 0, 
00:21:27.509 "state": "online", 00:21:27.509 "raid_level": "raid1", 00:21:27.509 "superblock": true, 00:21:27.509 "num_base_bdevs": 2, 00:21:27.509 "num_base_bdevs_discovered": 1, 00:21:27.509 "num_base_bdevs_operational": 1, 00:21:27.509 "base_bdevs_list": [ 00:21:27.509 { 00:21:27.509 "name": null, 00:21:27.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.509 "is_configured": false, 00:21:27.509 "data_offset": 0, 00:21:27.509 "data_size": 7936 00:21:27.509 }, 00:21:27.509 { 00:21:27.509 "name": "BaseBdev2", 00:21:27.509 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:27.509 "is_configured": true, 00:21:27.509 "data_offset": 256, 00:21:27.509 "data_size": 7936 00:21:27.509 } 00:21:27.509 ] 00:21:27.509 }' 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.509 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.768 
16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.768 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.768 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.768 "name": "raid_bdev1", 00:21:27.768 "uuid": "ef95a44a-2414-4a10-9dd9-5efd99ad8bf3", 00:21:27.768 "strip_size_kb": 0, 00:21:27.768 "state": "online", 00:21:27.768 "raid_level": "raid1", 00:21:27.768 "superblock": true, 00:21:27.768 "num_base_bdevs": 2, 00:21:27.768 "num_base_bdevs_discovered": 1, 00:21:27.768 "num_base_bdevs_operational": 1, 00:21:27.768 "base_bdevs_list": [ 00:21:27.768 { 00:21:27.768 "name": null, 00:21:27.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.768 "is_configured": false, 00:21:27.768 "data_offset": 0, 00:21:27.768 "data_size": 7936 00:21:27.768 }, 00:21:27.768 { 00:21:27.768 "name": "BaseBdev2", 00:21:27.768 "uuid": "78aefff9-ac6b-5312-bd05-8d9848a95d77", 00:21:27.768 "is_configured": true, 00:21:27.768 "data_offset": 256, 00:21:27.768 "data_size": 7936 00:21:27.768 } 00:21:27.768 ] 00:21:27.768 }' 00:21:27.768 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.768 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.768 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89891 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89891 ']' 00:21:28.027 16:29:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89891 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89891 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:28.027 killing process with pid 89891 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89891' 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89891 00:21:28.027 Received shutdown signal, test time was about 60.000000 seconds 00:21:28.027 00:21:28.027 Latency(us) 00:21:28.027 [2024-10-08T16:29:21.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.027 [2024-10-08T16:29:21.349Z] =================================================================================================================== 00:21:28.027 [2024-10-08T16:29:21.349Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.027 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89891 00:21:28.027 [2024-10-08 16:29:21.147332] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.027 [2024-10-08 16:29:21.147547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.027 [2024-10-08 16:29:21.147647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:28.027 [2024-10-08 16:29:21.147678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:28.285 [2024-10-08 16:29:21.413267] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.659 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:29.659 00:21:29.659 real 0m18.822s 00:21:29.659 user 0m25.507s 00:21:29.659 sys 0m1.616s 00:21:29.659 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.659 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.659 ************************************ 00:21:29.659 END TEST raid_rebuild_test_sb_md_interleaved 00:21:29.659 ************************************ 00:21:29.659 16:29:22 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:29.659 16:29:22 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:29.659 16:29:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89891 ']' 00:21:29.659 16:29:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89891 00:21:29.659 16:29:22 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:29.659 00:21:29.659 real 13m22.107s 00:21:29.659 user 18m38.212s 00:21:29.659 sys 1m53.791s 00:21:29.659 16:29:22 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.659 ************************************ 00:21:29.659 END TEST bdev_raid 00:21:29.659 16:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.659 ************************************ 00:21:29.659 16:29:22 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:29.659 16:29:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:29.659 16:29:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.659 16:29:22 -- common/autotest_common.sh@10 -- # set +x 00:21:29.659 
************************************ 00:21:29.659 START TEST spdkcli_raid 00:21:29.659 ************************************ 00:21:29.659 16:29:22 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:29.659 * Looking for test storage... 00:21:29.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:29.659 16:29:22 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:29.659 16:29:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:21:29.659 16:29:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:29.659 16:29:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:29.659 16:29:22 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:29.660 16:29:22 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.660 --rc genhtml_branch_coverage=1 00:21:29.660 --rc genhtml_function_coverage=1 00:21:29.660 --rc genhtml_legend=1 00:21:29.660 --rc geninfo_all_blocks=1 00:21:29.660 --rc geninfo_unexecuted_blocks=1 00:21:29.660 00:21:29.660 ' 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.660 --rc genhtml_branch_coverage=1 00:21:29.660 --rc genhtml_function_coverage=1 00:21:29.660 --rc genhtml_legend=1 00:21:29.660 --rc geninfo_all_blocks=1 00:21:29.660 --rc geninfo_unexecuted_blocks=1 00:21:29.660 00:21:29.660 ' 00:21:29.660 
16:29:22 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.660 --rc genhtml_branch_coverage=1 00:21:29.660 --rc genhtml_function_coverage=1 00:21:29.660 --rc genhtml_legend=1 00:21:29.660 --rc geninfo_all_blocks=1 00:21:29.660 --rc geninfo_unexecuted_blocks=1 00:21:29.660 00:21:29.660 ' 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.660 --rc genhtml_branch_coverage=1 00:21:29.660 --rc genhtml_function_coverage=1 00:21:29.660 --rc genhtml_legend=1 00:21:29.660 --rc geninfo_all_blocks=1 00:21:29.660 --rc geninfo_unexecuted_blocks=1 00:21:29.660 00:21:29.660 ' 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:29.660 16:29:22 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90574 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90574 00:21:29.660 16:29:22 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90574 ']' 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.660 16:29:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.919 [2024-10-08 16:29:23.117849] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:29.919 [2024-10-08 16:29:23.118067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90574 ] 00:21:30.177 [2024-10-08 16:29:23.298805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:30.436 [2024-10-08 16:29:23.561784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.436 [2024-10-08 16:29:23.561812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:21:31.370 16:29:24 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.370 16:29:24 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.370 16:29:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.370 16:29:24 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:31.370 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:31.370 ' 00:21:33.272 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:33.272 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:33.272 16:29:26 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:33.272 16:29:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.272 16:29:26 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.272 16:29:26 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:33.272 16:29:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.272 16:29:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.272 16:29:26 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:33.272 ' 00:21:34.207 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:34.207 16:29:27 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:34.207 16:29:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.207 16:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.466 16:29:27 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:34.466 16:29:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.466 16:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.466 16:29:27 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:34.466 16:29:27 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:35.033 16:29:28 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:35.033 16:29:28 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:35.033 16:29:28 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:35.033 16:29:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.033 16:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.033 16:29:28 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:35.033 16:29:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.033 16:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.033 16:29:28 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:35.033 ' 00:21:35.966 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:36.225 16:29:29 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:36.225 16:29:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.225 16:29:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.225 16:29:29 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:36.225 16:29:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:36.225 16:29:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.225 16:29:29 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:36.225 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:36.225 ' 00:21:37.600 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:37.600 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:37.859 16:29:30 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.859 16:29:30 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90574 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90574 ']' 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90574 00:21:37.859 16:29:30 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.859 16:29:30 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90574 00:21:37.859 16:29:31 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.859 16:29:31 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.859 killing process with pid 90574 00:21:37.859 16:29:31 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90574' 00:21:37.859 16:29:31 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90574 00:21:37.859 16:29:31 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90574 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90574 ']' 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90574 00:21:40.390 16:29:33 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90574 ']' 00:21:40.390 16:29:33 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90574 00:21:40.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90574) - No such process 00:21:40.390 Process with pid 90574 is not found 00:21:40.390 16:29:33 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90574 is not found' 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:40.390 16:29:33 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:40.390 00:21:40.390 real 0m10.570s 00:21:40.390 user 0m21.545s 00:21:40.390 sys 
0m1.273s 00:21:40.390 16:29:33 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.390 16:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:40.390 ************************************ 00:21:40.390 END TEST spdkcli_raid 00:21:40.390 ************************************ 00:21:40.390 16:29:33 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:40.390 16:29:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:40.390 16:29:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.390 16:29:33 -- common/autotest_common.sh@10 -- # set +x 00:21:40.390 ************************************ 00:21:40.390 START TEST blockdev_raid5f 00:21:40.390 ************************************ 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:40.390 * Looking for test storage... 00:21:40.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.390 16:29:33 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.390 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:40.390 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.390 --rc genhtml_branch_coverage=1 00:21:40.390 --rc genhtml_function_coverage=1 00:21:40.390 --rc genhtml_legend=1 00:21:40.390 --rc geninfo_all_blocks=1 00:21:40.391 --rc geninfo_unexecuted_blocks=1 00:21:40.391 00:21:40.391 ' 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:40.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.391 --rc genhtml_branch_coverage=1 00:21:40.391 --rc genhtml_function_coverage=1 00:21:40.391 --rc genhtml_legend=1 00:21:40.391 --rc geninfo_all_blocks=1 00:21:40.391 --rc geninfo_unexecuted_blocks=1 00:21:40.391 00:21:40.391 ' 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:40.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.391 --rc genhtml_branch_coverage=1 00:21:40.391 --rc genhtml_function_coverage=1 00:21:40.391 --rc genhtml_legend=1 00:21:40.391 --rc geninfo_all_blocks=1 00:21:40.391 --rc geninfo_unexecuted_blocks=1 00:21:40.391 00:21:40.391 ' 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:40.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.391 --rc genhtml_branch_coverage=1 00:21:40.391 --rc genhtml_function_coverage=1 00:21:40.391 --rc genhtml_legend=1 00:21:40.391 --rc geninfo_all_blocks=1 00:21:40.391 --rc geninfo_unexecuted_blocks=1 00:21:40.391 00:21:40.391 ' 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90854 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:40.391 16:29:33 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90854 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90854 ']' 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.391 16:29:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.650 [2024-10-08 16:29:33.712919] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:40.650 [2024-10-08 16:29:33.713423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90854 ] 00:21:40.650 [2024-10-08 16:29:33.891189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.908 [2024-10-08 16:29:34.127870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.843 16:29:35 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.843 16:29:35 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:21:41.843 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:41.843 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:21:41.843 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:41.843 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.843 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:41.843 Malloc0 00:21:41.843 Malloc1 00:21:41.843 Malloc2 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.102 16:29:35 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "700f402d-d50d-42bb-9f93-89f37c6bce3c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "700f402d-d50d-42bb-9f93-89f37c6bce3c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "700f402d-d50d-42bb-9f93-89f37c6bce3c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58787f05-68eb-49db-8cba-bd3fb7b5e30d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3d7e88fe-578e-4b90-b029-669e81f1f5e6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "060209cd-ec39-45ac-84aa-ca2c612323f1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:42.102 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90854 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90854 ']' 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90854 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.102 
16:29:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90854 00:21:42.102 16:29:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.103 16:29:35 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.103 killing process with pid 90854 00:21:42.103 16:29:35 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90854' 00:21:42.103 16:29:35 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90854 00:21:42.103 16:29:35 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90854 00:21:45.388 16:29:38 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:45.388 16:29:38 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:45.388 16:29:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:45.388 16:29:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.388 16:29:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:45.388 ************************************ 00:21:45.388 START TEST bdev_hello_world 00:21:45.388 ************************************ 00:21:45.388 16:29:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:45.388 [2024-10-08 16:29:38.133587] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:45.388 [2024-10-08 16:29:38.133772] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90922 ] 00:21:45.388 [2024-10-08 16:29:38.304755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.388 [2024-10-08 16:29:38.522990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.955 [2024-10-08 16:29:39.015838] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:45.955 [2024-10-08 16:29:39.015912] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:45.955 [2024-10-08 16:29:39.015950] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:45.955 [2024-10-08 16:29:39.016514] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:45.955 [2024-10-08 16:29:39.016691] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:45.955 [2024-10-08 16:29:39.016750] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:45.955 [2024-10-08 16:29:39.016826] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:45.955 00:21:45.955 [2024-10-08 16:29:39.016856] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:47.331 00:21:47.331 real 0m2.342s 00:21:47.331 user 0m1.909s 00:21:47.331 sys 0m0.309s 00:21:47.331 16:29:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.331 16:29:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:47.331 ************************************ 00:21:47.331 END TEST bdev_hello_world 00:21:47.331 ************************************ 00:21:47.331 16:29:40 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:47.331 16:29:40 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:47.331 16:29:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.331 16:29:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:47.331 ************************************ 00:21:47.331 START TEST bdev_bounds 00:21:47.331 ************************************ 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:21:47.331 Process bdevio pid: 90964 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90964 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90964' 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90964 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90964 ']' 00:21:47.331 16:29:40 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.331 16:29:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:47.331 [2024-10-08 16:29:40.533573] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:21:47.331 [2024-10-08 16:29:40.533777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90964 ] 00:21:47.605 [2024-10-08 16:29:40.713762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.883 [2024-10-08 16:29:40.975956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.883 [2024-10-08 16:29:40.976123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.883 [2024-10-08 16:29:40.976132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.450 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.450 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:21:48.450 16:29:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:48.450 I/O targets: 00:21:48.450 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:48.450 00:21:48.450 
00:21:48.450 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.450 http://cunit.sourceforge.net/ 00:21:48.450 00:21:48.450 00:21:48.450 Suite: bdevio tests on: raid5f 00:21:48.450 Test: blockdev write read block ...passed 00:21:48.450 Test: blockdev write zeroes read block ...passed 00:21:48.450 Test: blockdev write zeroes read no split ...passed 00:21:48.709 Test: blockdev write zeroes read split ...passed 00:21:48.709 Test: blockdev write zeroes read split partial ...passed 00:21:48.709 Test: blockdev reset ...passed 00:21:48.709 Test: blockdev write read 8 blocks ...passed 00:21:48.709 Test: blockdev write read size > 128k ...passed 00:21:48.709 Test: blockdev write read invalid size ...passed 00:21:48.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:48.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:48.709 Test: blockdev write read max offset ...passed 00:21:48.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:48.709 Test: blockdev writev readv 8 blocks ...passed 00:21:48.709 Test: blockdev writev readv 30 x 1block ...passed 00:21:48.709 Test: blockdev writev readv block ...passed 00:21:48.709 Test: blockdev writev readv size > 128k ...passed 00:21:48.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:48.709 Test: blockdev comparev and writev ...passed 00:21:48.709 Test: blockdev nvme passthru rw ...passed 00:21:48.709 Test: blockdev nvme passthru vendor specific ...passed 00:21:48.709 Test: blockdev nvme admin passthru ...passed 00:21:48.709 Test: blockdev copy ...passed 00:21:48.709 00:21:48.709 Run Summary: Type Total Ran Passed Failed Inactive 00:21:48.709 suites 1 1 n/a 0 0 00:21:48.709 tests 23 23 23 0 0 00:21:48.709 asserts 130 130 130 0 n/a 00:21:48.709 00:21:48.709 Elapsed time = 0.591 seconds 00:21:48.709 0 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90964 00:21:48.709 
16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90964 ']' 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90964 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90964 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90964' 00:21:48.709 killing process with pid 90964 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90964 00:21:48.709 16:29:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90964 00:21:50.085 16:29:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:50.085 00:21:50.085 real 0m2.975s 00:21:50.085 user 0m6.890s 00:21:50.085 sys 0m0.485s 00:21:50.085 16:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:50.085 ************************************ 00:21:50.085 END TEST bdev_bounds 00:21:50.085 ************************************ 00:21:50.085 16:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:50.343 16:29:43 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:50.343 16:29:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:50.343 16:29:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:50.343 
16:29:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.343 ************************************ 00:21:50.343 START TEST bdev_nbd 00:21:50.343 ************************************ 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91029 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91029 /var/tmp/spdk-nbd.sock 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 91029 ']' 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:50.343 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.344 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:50.344 [2024-10-08 16:29:43.562310] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:50.344 [2024-10-08 16:29:43.562503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.602 [2024-10-08 16:29:43.731134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.861 [2024-10-08 16:29:43.936166] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:51.429 16:29:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.688 1+0 records in 00:21:51.688 1+0 records out 00:21:51.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340552 s, 12.0 MB/s 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:51.688 16:29:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:51.947 { 00:21:51.947 "nbd_device": "/dev/nbd0", 00:21:51.947 "bdev_name": "raid5f" 00:21:51.947 } 00:21:51.947 ]' 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:51.947 { 00:21:51.947 "nbd_device": "/dev/nbd0", 00:21:51.947 "bdev_name": "raid5f" 00:21:51.947 } 00:21:51.947 ]' 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:51.947 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:51.948 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:51.948 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:51.948 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.222 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:52.502 16:29:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:52.761 /dev/nbd0 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:52.761 16:29:46 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:52.761 1+0 records in 00:21:52.761 1+0 records out 00:21:52.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284359 s, 14.4 MB/s 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.761 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:53.019 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:53.019 { 00:21:53.019 "nbd_device": "/dev/nbd0", 00:21:53.019 "bdev_name": "raid5f" 00:21:53.019 } 00:21:53.019 ]' 00:21:53.019 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:53.019 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:53.019 { 00:21:53.019 "nbd_device": "/dev/nbd0", 00:21:53.019 "bdev_name": "raid5f" 00:21:53.019 } 00:21:53.019 ]' 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:53.278 256+0 records in 00:21:53.278 256+0 records out 00:21:53.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630094 s, 166 MB/s 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:53.278 256+0 records in 00:21:53.278 256+0 records out 00:21:53.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0382653 s, 27.4 MB/s 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:53.278 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.536 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:53.795 16:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:54.052 malloc_lvol_verify 00:21:54.052 16:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:54.310 a8603bcf-c5de-4b00-a642-4c75864fbcde 00:21:54.310 16:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:54.568 a9bca999-3b15-4d41-8fae-d2c2bf8663f8 00:21:54.568 16:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:54.826 /dev/nbd0 00:21:54.826 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:54.826 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:54.826 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:54.826 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:54.826 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:54.826 mke2fs 1.47.0 (5-Feb-2023) 00:21:54.826 Discarding device blocks: 0/4096 done 00:21:54.826 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:54.826 00:21:54.827 Allocating group tables: 0/1 done 00:21:54.827 Writing inode tables: 0/1 done 00:21:54.827 Creating journal (1024 blocks): done 00:21:54.827 Writing superblocks and filesystem accounting information: 0/1 done 00:21:54.827 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:54.827 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91029 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 91029 ']' 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 91029 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91029 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91029' 00:21:55.085 killing process with pid 91029 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 91029 00:21:55.085 16:29:48 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 91029 00:21:56.987 16:29:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:56.987 00:21:56.987 real 0m6.551s 00:21:56.987 user 0m9.170s 00:21:56.987 sys 0m1.424s 00:21:56.987 16:29:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.987 16:29:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:56.987 ************************************ 00:21:56.987 END TEST bdev_nbd 00:21:56.987 ************************************ 00:21:56.987 16:29:50 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:56.987 16:29:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:21:56.987 16:29:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:21:56.987 16:29:50 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:56.987 16:29:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:56.987 16:29:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.987 16:29:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.987 ************************************ 00:21:56.987 START TEST bdev_fio 00:21:56.987 ************************************ 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:56.987 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:56.987 ************************************ 00:21:56.987 START TEST bdev_fio_rw_verify 00:21:56.987 ************************************ 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:56.987 16:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:57.246 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:57.246 fio-3.35 00:21:57.246 Starting 1 thread 00:22:09.473 00:22:09.473 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91238: Tue Oct 8 16:30:01 2024 00:22:09.473 read: IOPS=8550, BW=33.4MiB/s (35.0MB/s)(334MiB/10001msec) 00:22:09.473 slat (nsec): min=22167, max=96002, avg=27374.15, stdev=6670.50 00:22:09.473 clat (usec): min=15, max=508, avg=182.62, stdev=67.48 00:22:09.473 lat (usec): min=42, max=542, avg=210.00, stdev=68.53 00:22:09.473 clat percentiles (usec): 00:22:09.473 | 50.000th=[ 180], 99.000th=[ 326], 99.900th=[ 375], 99.990th=[ 412], 00:22:09.473 | 99.999th=[ 510] 00:22:09.473 write: IOPS=8994, BW=35.1MiB/s (36.8MB/s)(347MiB/9875msec); 0 zone resets 00:22:09.473 slat (usec): min=12, max=202, avg=24.15, stdev= 7.16 00:22:09.473 clat (usec): min=76, max=1395, avg=434.50, stdev=64.65 00:22:09.473 lat (usec): min=97, max=1598, avg=458.65, stdev=66.43 00:22:09.473 clat percentiles (usec): 00:22:09.473 | 50.000th=[ 437], 99.000th=[ 578], 99.900th=[ 676], 99.990th=[ 1237], 00:22:09.473 | 99.999th=[ 1401] 00:22:09.473 bw ( KiB/s): min=33720, max=38720, per=98.86%, avg=35570.11, stdev=1606.77, samples=19 00:22:09.473 iops : min= 8430, max= 9680, avg=8892.53, stdev=401.69, samples=19 00:22:09.473 lat (usec) : 20=0.01%, 100=6.55%, 
250=33.24%, 500=52.80%, 750=7.38% 00:22:09.473 lat (usec) : 1000=0.01% 00:22:09.473 lat (msec) : 2=0.01% 00:22:09.473 cpu : usr=98.45%, sys=0.68%, ctx=31, majf=0, minf=7436 00:22:09.473 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.473 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.473 issued rwts: total=85511,88824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:09.474 00:22:09.474 Run status group 0 (all jobs): 00:22:09.474 READ: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=334MiB (350MB), run=10001-10001msec 00:22:09.474 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=347MiB (364MB), run=9875-9875msec 00:22:09.749 ----------------------------------------------------- 00:22:09.749 Suppressions used: 00:22:09.749 count bytes template 00:22:09.749 1 7 /usr/src/fio/parse.c 00:22:09.749 670 64320 /usr/src/fio/iolog.c 00:22:09.749 1 8 libtcmalloc_minimal.so 00:22:09.749 1 904 libcrypto.so 00:22:09.749 ----------------------------------------------------- 00:22:09.749 00:22:09.749 00:22:09.749 real 0m12.845s 00:22:09.749 user 0m13.220s 00:22:09.749 sys 0m0.863s 00:22:09.749 16:30:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.749 16:30:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:09.749 ************************************ 00:22:09.749 END TEST bdev_fio_rw_verify 00:22:09.749 ************************************ 00:22:10.017 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:10.017 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "700f402d-d50d-42bb-9f93-89f37c6bce3c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "700f402d-d50d-42bb-9f93-89f37c6bce3c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "700f402d-d50d-42bb-9f93-89f37c6bce3c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58787f05-68eb-49db-8cba-bd3fb7b5e30d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3d7e88fe-578e-4b90-b029-669e81f1f5e6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "060209cd-ec39-45ac-84aa-ca2c612323f1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:10.018 /home/vagrant/spdk_repo/spdk 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:10.018 00:22:10.018 real 0m13.090s 00:22:10.018 user 0m13.322s 00:22:10.018 sys 0m0.975s 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.018 16:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:10.018 ************************************ 00:22:10.018 END TEST bdev_fio 00:22:10.018 ************************************ 00:22:10.018 16:30:03 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:10.018 16:30:03 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:10.018 16:30:03 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:10.018 16:30:03 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.018 16:30:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:10.018 ************************************ 00:22:10.018 START TEST bdev_verify 00:22:10.018 ************************************ 00:22:10.018 16:30:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:10.018 [2024-10-08 16:30:03.315521] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:22:10.018 [2024-10-08 16:30:03.315740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91398 ] 00:22:10.277 [2024-10-08 16:30:03.487368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:10.536 [2024-10-08 16:30:03.752499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.536 [2024-10-08 16:30:03.752502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.104 Running I/O for 5 seconds... 00:22:13.415 12418.00 IOPS, 48.51 MiB/s [2024-10-08T16:30:07.673Z] 12480.50 IOPS, 48.75 MiB/s [2024-10-08T16:30:08.633Z] 11522.00 IOPS, 45.01 MiB/s [2024-10-08T16:30:09.568Z] 10956.00 IOPS, 42.80 MiB/s [2024-10-08T16:30:09.568Z] 10667.80 IOPS, 41.67 MiB/s 00:22:16.246 Latency(us) 00:22:16.246 [2024-10-08T16:30:09.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.246 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:16.246 Verification LBA range: start 0x0 length 0x2000 00:22:16.246 raid5f : 5.02 5475.36 21.39 0.00 0.00 35233.53 314.65 33602.09 00:22:16.246 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:16.246 Verification LBA range: start 0x2000 length 0x2000 00:22:16.246 raid5f : 5.02 5154.14 20.13 0.00 0.00 37416.83 318.37 46709.29 00:22:16.246 [2024-10-08T16:30:09.568Z] =================================================================================================================== 00:22:16.246 [2024-10-08T16:30:09.568Z] Total : 10629.50 41.52 0.00 0.00 36292.82 314.65 46709.29 00:22:17.619 00:22:17.619 real 0m7.446s 00:22:17.619 user 0m13.396s 00:22:17.619 sys 0m0.357s 00:22:17.619 16:30:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.619 16:30:10 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 ************************************ 00:22:17.619 END TEST bdev_verify 00:22:17.619 ************************************ 00:22:17.619 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:17.619 16:30:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:17.619 16:30:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.619 16:30:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 ************************************ 00:22:17.619 START TEST bdev_verify_big_io 00:22:17.619 ************************************ 00:22:17.619 16:30:10 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:17.619 [2024-10-08 16:30:10.824657] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:17.619 [2024-10-08 16:30:10.824915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91498 ] 00:22:17.877 [2024-10-08 16:30:11.001374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.135 [2024-10-08 16:30:11.220179] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.135 [2024-10-08 16:30:11.220192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.702 Running I/O for 5 seconds... 
00:22:20.585 506.00 IOPS, 31.62 MiB/s [2024-10-08T16:30:14.842Z] 633.00 IOPS, 39.56 MiB/s [2024-10-08T16:30:16.217Z] 592.00 IOPS, 37.00 MiB/s [2024-10-08T16:30:17.151Z] 617.75 IOPS, 38.61 MiB/s [2024-10-08T16:30:17.151Z] 609.20 IOPS, 38.08 MiB/s 00:22:23.829 Latency(us) 00:22:23.829 [2024-10-08T16:30:17.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.829 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:23.829 Verification LBA range: start 0x0 length 0x200 00:22:23.829 raid5f : 5.31 322.71 20.17 0.00 0.00 9771061.03 230.87 427056.41 00:22:23.829 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:23.829 Verification LBA range: start 0x200 length 0x200 00:22:23.829 raid5f : 5.26 313.88 19.62 0.00 0.00 10192707.11 190.84 455653.93 00:22:23.829 [2024-10-08T16:30:17.151Z] =================================================================================================================== 00:22:23.829 [2024-10-08T16:30:17.151Z] Total : 636.59 39.79 0.00 0.00 9977998.51 190.84 455653.93 00:22:25.777 00:22:25.777 real 0m7.945s 00:22:25.777 user 0m14.344s 00:22:25.777 sys 0m0.363s 00:22:25.777 16:30:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.777 16:30:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:25.777 ************************************ 00:22:25.777 END TEST bdev_verify_big_io 00:22:25.777 ************************************ 00:22:25.777 16:30:18 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:25.777 16:30:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:25.777 16:30:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.777 16:30:18 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:25.777 ************************************ 00:22:25.777 START TEST bdev_write_zeroes 00:22:25.777 ************************************ 00:22:25.777 16:30:18 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:25.777 [2024-10-08 16:30:18.807371] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:25.777 [2024-10-08 16:30:18.807596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91597 ] 00:22:25.777 [2024-10-08 16:30:18.973637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.036 [2024-10-08 16:30:19.233296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.605 Running I/O for 1 seconds... 
00:22:27.540 20487.00 IOPS, 80.03 MiB/s 00:22:27.540 Latency(us) 00:22:27.540 [2024-10-08T16:30:20.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.540 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:27.540 raid5f : 1.01 20456.89 79.91 0.00 0.00 6231.57 2576.76 11975.21 00:22:27.540 [2024-10-08T16:30:20.862Z] =================================================================================================================== 00:22:27.540 [2024-10-08T16:30:20.862Z] Total : 20456.89 79.91 0.00 0.00 6231.57 2576.76 11975.21 00:22:29.506 00:22:29.506 real 0m3.613s 00:22:29.506 user 0m3.105s 00:22:29.506 sys 0m0.372s 00:22:29.506 16:30:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.506 16:30:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:29.506 ************************************ 00:22:29.506 END TEST bdev_write_zeroes 00:22:29.506 ************************************ 00:22:29.506 16:30:22 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:29.506 16:30:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:29.506 16:30:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.506 16:30:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:29.506 ************************************ 00:22:29.506 START TEST bdev_json_nonenclosed 00:22:29.506 ************************************ 00:22:29.506 16:30:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:29.506 [2024-10-08 
16:30:22.473160] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:29.506 [2024-10-08 16:30:22.473330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91650 ] 00:22:29.506 [2024-10-08 16:30:22.640295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.765 [2024-10-08 16:30:22.904553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.765 [2024-10-08 16:30:22.904701] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:29.765 [2024-10-08 16:30:22.904748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:29.765 [2024-10-08 16:30:22.904765] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:30.023 00:22:30.023 real 0m0.965s 00:22:30.023 user 0m0.691s 00:22:30.023 sys 0m0.167s 00:22:30.023 16:30:23 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.023 16:30:23 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:30.023 ************************************ 00:22:30.023 END TEST bdev_json_nonenclosed 00:22:30.023 ************************************ 00:22:30.283 16:30:23 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:30.283 16:30:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:30.283 16:30:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.283 16:30:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.283 
************************************ 00:22:30.283 START TEST bdev_json_nonarray 00:22:30.283 ************************************ 00:22:30.283 16:30:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:30.283 [2024-10-08 16:30:23.510325] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:30.283 [2024-10-08 16:30:23.510538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91681 ] 00:22:30.541 [2024-10-08 16:30:23.686505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.800 [2024-10-08 16:30:23.948050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.800 [2024-10-08 16:30:23.948231] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:30.800 [2024-10-08 16:30:23.948263] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:30.800 [2024-10-08 16:30:23.948278] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:31.367 00:22:31.367 real 0m0.982s 00:22:31.367 user 0m0.695s 00:22:31.367 sys 0m0.180s 00:22:31.367 16:30:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.367 16:30:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:31.367 ************************************ 00:22:31.367 END TEST bdev_json_nonarray 00:22:31.367 ************************************ 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:31.367 16:30:24 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:31.367 ************************************ 00:22:31.367 END TEST blockdev_raid5f 00:22:31.367 ************************************ 00:22:31.367 00:22:31.367 real 0m51.060s 00:22:31.367 user 1m8.116s 00:22:31.367 sys 0m5.773s 00:22:31.367 16:30:24 blockdev_raid5f -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.367 16:30:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:31.367 16:30:24 -- spdk/autotest.sh@194 -- # uname -s 00:22:31.367 16:30:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:31.367 16:30:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.367 16:30:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.367 16:30:24 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:31.367 16:30:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:31.367 16:30:24 -- spdk/autotest.sh@381 -- # trap - 
SIGINT SIGTERM EXIT 00:22:31.367 16:30:24 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:31.367 16:30:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.367 16:30:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.367 16:30:24 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:31.367 16:30:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:31.367 16:30:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:31.367 16:30:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.743 INFO: APP EXITING 00:22:32.743 INFO: killing all VMs 00:22:32.743 INFO: killing vhost app 00:22:32.743 INFO: EXIT DONE 00:22:33.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:33.310 Waiting for block devices as requested 00:22:33.310 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:33.310 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:34.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:34.246 Cleaning 00:22:34.246 Removing: /var/run/dpdk/spdk0/config 00:22:34.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:34.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:34.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:34.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:34.246 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:34.246 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:34.246 Removing: /dev/shm/spdk_tgt_trace.pid56902 00:22:34.246 Removing: /var/run/dpdk/spdk0 00:22:34.246 Removing: /var/run/dpdk/spdk_pid56667 00:22:34.246 Removing: /var/run/dpdk/spdk_pid56902 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57137 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57252 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57308 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57447 00:22:34.246 Removing: 
/var/run/dpdk/spdk_pid57465 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57675 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57792 00:22:34.246 Removing: /var/run/dpdk/spdk_pid57899 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58027 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58140 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58185 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58227 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58303 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58420 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58897 00:22:34.246 Removing: /var/run/dpdk/spdk_pid58985 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59070 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59086 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59257 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59279 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59429 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59456 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59531 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59549 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59624 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59642 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59848 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59896 00:22:34.246 Removing: /var/run/dpdk/spdk_pid59985 00:22:34.246 Removing: /var/run/dpdk/spdk_pid61404 00:22:34.246 Removing: /var/run/dpdk/spdk_pid61624 00:22:34.246 Removing: /var/run/dpdk/spdk_pid61770 00:22:34.246 Removing: /var/run/dpdk/spdk_pid62435 00:22:34.246 Removing: /var/run/dpdk/spdk_pid62652 00:22:34.246 Removing: /var/run/dpdk/spdk_pid62798 00:22:34.246 Removing: /var/run/dpdk/spdk_pid63464 00:22:34.246 Removing: /var/run/dpdk/spdk_pid63800 00:22:34.246 Removing: /var/run/dpdk/spdk_pid63951 00:22:34.246 Removing: /var/run/dpdk/spdk_pid65369 00:22:34.246 Removing: /var/run/dpdk/spdk_pid65636 00:22:34.246 Removing: /var/run/dpdk/spdk_pid65788 00:22:34.246 Removing: /var/run/dpdk/spdk_pid67211 00:22:34.246 Removing: /var/run/dpdk/spdk_pid67469 00:22:34.246 Removing: 
/var/run/dpdk/spdk_pid67615 00:22:34.246 Removing: /var/run/dpdk/spdk_pid69033 00:22:34.246 Removing: /var/run/dpdk/spdk_pid69492 00:22:34.246 Removing: /var/run/dpdk/spdk_pid69642 00:22:34.246 Removing: /var/run/dpdk/spdk_pid71164 00:22:34.246 Removing: /var/run/dpdk/spdk_pid71430 00:22:34.246 Removing: /var/run/dpdk/spdk_pid71581 00:22:34.246 Removing: /var/run/dpdk/spdk_pid73100 00:22:34.246 Removing: /var/run/dpdk/spdk_pid73366 00:22:34.246 Removing: /var/run/dpdk/spdk_pid73516 00:22:34.246 Removing: /var/run/dpdk/spdk_pid75024 00:22:34.246 Removing: /var/run/dpdk/spdk_pid75528 00:22:34.246 Removing: /var/run/dpdk/spdk_pid75674 00:22:34.246 Removing: /var/run/dpdk/spdk_pid75823 00:22:34.246 Removing: /var/run/dpdk/spdk_pid76280 00:22:34.246 Removing: /var/run/dpdk/spdk_pid77046 00:22:34.246 Removing: /var/run/dpdk/spdk_pid77428 00:22:34.246 Removing: /var/run/dpdk/spdk_pid78134 00:22:34.246 Removing: /var/run/dpdk/spdk_pid78619 00:22:34.246 Removing: /var/run/dpdk/spdk_pid79423 00:22:34.246 Removing: /var/run/dpdk/spdk_pid79838 00:22:34.246 Removing: /var/run/dpdk/spdk_pid81863 00:22:34.246 Removing: /var/run/dpdk/spdk_pid82307 00:22:34.246 Removing: /var/run/dpdk/spdk_pid82765 00:22:34.246 Removing: /var/run/dpdk/spdk_pid84900 00:22:34.246 Removing: /var/run/dpdk/spdk_pid85391 00:22:34.246 Removing: /var/run/dpdk/spdk_pid85899 00:22:34.246 Removing: /var/run/dpdk/spdk_pid86977 00:22:34.246 Removing: /var/run/dpdk/spdk_pid87306 00:22:34.246 Removing: /var/run/dpdk/spdk_pid88263 00:22:34.246 Removing: /var/run/dpdk/spdk_pid88597 00:22:34.246 Removing: /var/run/dpdk/spdk_pid89557 00:22:34.505 Removing: /var/run/dpdk/spdk_pid89891 00:22:34.505 Removing: /var/run/dpdk/spdk_pid90574 00:22:34.505 Removing: /var/run/dpdk/spdk_pid90854 00:22:34.505 Removing: /var/run/dpdk/spdk_pid90922 00:22:34.505 Removing: /var/run/dpdk/spdk_pid90964 00:22:34.505 Removing: /var/run/dpdk/spdk_pid91223 00:22:34.505 Removing: /var/run/dpdk/spdk_pid91398 00:22:34.505 Removing: 
/var/run/dpdk/spdk_pid91498 00:22:34.505 Removing: /var/run/dpdk/spdk_pid91597 00:22:34.505 Removing: /var/run/dpdk/spdk_pid91650 00:22:34.505 Removing: /var/run/dpdk/spdk_pid91681 00:22:34.505 Clean 00:22:34.505 16:30:27 -- common/autotest_common.sh@1451 -- # return 0 00:22:34.505 16:30:27 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:34.505 16:30:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.505 16:30:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.505 16:30:27 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:34.505 16:30:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.505 16:30:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.505 16:30:27 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:34.505 16:30:27 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:34.505 16:30:27 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:34.505 16:30:27 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:34.505 16:30:27 -- spdk/autotest.sh@394 -- # hostname 00:22:34.505 16:30:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:34.764 geninfo: WARNING: invalid characters removed from testname! 
00:23:01.324 16:30:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:02.258 16:30:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:04.789 16:30:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:07.358 16:31:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:10.642 16:31:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:13.203 16:31:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:15.737 16:31:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:15.996 16:31:09 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:23:15.996 16:31:09 -- common/autotest_common.sh@1681 -- $ lcov --version 00:23:15.996 16:31:09 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:23:15.996 16:31:09 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:23:15.996 16:31:09 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:23:15.996 16:31:09 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:23:15.996 16:31:09 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:23:15.996 16:31:09 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:15.996 16:31:09 -- scripts/common.sh@336 -- $ read -ra ver1 00:23:15.996 16:31:09 -- scripts/common.sh@337 -- $ IFS=.-: 00:23:15.996 16:31:09 -- scripts/common.sh@337 -- $ read -ra ver2 00:23:15.996 16:31:09 -- scripts/common.sh@338 -- $ local 'op=<' 00:23:15.996 16:31:09 -- scripts/common.sh@340 -- $ ver1_l=2 00:23:15.996 16:31:09 -- scripts/common.sh@341 -- $ ver2_l=1 00:23:15.996 16:31:09 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:23:15.996 16:31:09 -- scripts/common.sh@344 -- $ case "$op" in 00:23:15.996 16:31:09 -- scripts/common.sh@345 -- $ : 1 00:23:15.996 16:31:09 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:23:15.996 16:31:09 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.996 16:31:09 -- scripts/common.sh@365 -- $ decimal 1 00:23:15.996 16:31:09 -- scripts/common.sh@353 -- $ local d=1 00:23:15.996 16:31:09 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:15.996 16:31:09 -- scripts/common.sh@355 -- $ echo 1 00:23:15.996 16:31:09 -- scripts/common.sh@365 -- $ ver1[v]=1 00:23:15.996 16:31:09 -- scripts/common.sh@366 -- $ decimal 2 00:23:15.996 16:31:09 -- scripts/common.sh@353 -- $ local d=2 00:23:15.996 16:31:09 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:15.996 16:31:09 -- scripts/common.sh@355 -- $ echo 2 00:23:15.996 16:31:09 -- scripts/common.sh@366 -- $ ver2[v]=2 00:23:15.996 16:31:09 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:23:15.996 16:31:09 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:23:15.996 16:31:09 -- scripts/common.sh@368 -- $ return 0 00:23:15.996 16:31:09 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.996 16:31:09 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:23:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.996 --rc genhtml_branch_coverage=1 00:23:15.996 --rc genhtml_function_coverage=1 00:23:15.996 --rc genhtml_legend=1 00:23:15.996 --rc geninfo_all_blocks=1 00:23:15.996 --rc geninfo_unexecuted_blocks=1 00:23:15.996 00:23:15.996 ' 00:23:15.996 16:31:09 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:23:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.996 --rc genhtml_branch_coverage=1 00:23:15.996 --rc genhtml_function_coverage=1 00:23:15.996 --rc genhtml_legend=1 00:23:15.996 --rc geninfo_all_blocks=1 00:23:15.996 --rc geninfo_unexecuted_blocks=1 00:23:15.996 00:23:15.996 ' 00:23:15.996 16:31:09 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:23:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.996 --rc genhtml_branch_coverage=1 00:23:15.996 --rc 
genhtml_function_coverage=1 00:23:15.996 --rc genhtml_legend=1 00:23:15.996 --rc geninfo_all_blocks=1 00:23:15.996 --rc geninfo_unexecuted_blocks=1 00:23:15.996 00:23:15.996 ' 00:23:15.996 16:31:09 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:23:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.996 --rc genhtml_branch_coverage=1 00:23:15.996 --rc genhtml_function_coverage=1 00:23:15.996 --rc genhtml_legend=1 00:23:15.996 --rc geninfo_all_blocks=1 00:23:15.996 --rc geninfo_unexecuted_blocks=1 00:23:15.996 00:23:15.996 ' 00:23:15.996 16:31:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.996 16:31:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:15.996 16:31:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:15.996 16:31:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.996 16:31:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.996 16:31:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.996 16:31:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.996 16:31:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.996 16:31:09 -- paths/export.sh@5 -- $ export PATH 00:23:15.996 16:31:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.996 16:31:09 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:15.996 16:31:09 -- common/autobuild_common.sh@486 -- $ date +%s 00:23:15.997 16:31:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728405069.XXXXXX 00:23:15.997 16:31:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728405069.s3TBmN 00:23:15.997 16:31:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:23:15.997 16:31:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:23:15.997 16:31:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:15.997 16:31:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:15.997 16:31:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:15.997 16:31:09 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:23:15.997 16:31:09 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:15.997 16:31:09 -- common/autotest_common.sh@10 -- $ set +x 00:23:15.997 16:31:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:23:15.997 16:31:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:23:15.997 16:31:09 -- pm/common@17 -- $ local monitor 00:23:15.997 16:31:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:15.997 16:31:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:15.997 16:31:09 -- pm/common@25 -- $ sleep 1 00:23:15.997 16:31:09 -- pm/common@21 -- $ date +%s 00:23:15.997 16:31:09 -- pm/common@21 -- $ date +%s 00:23:15.997 16:31:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728405069 00:23:15.997 16:31:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728405069 00:23:15.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728405069_collect-cpu-load.pm.log 00:23:15.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728405069_collect-vmstat.pm.log 00:23:16.933 16:31:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:23:16.933 16:31:10 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:23:16.933 16:31:10 -- spdk/autopackage.sh@14 -- $ timing_finish 00:23:16.933 16:31:10 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:16.933 16:31:10 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:16.933 
16:31:10 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:17.192 16:31:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:17.192 16:31:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:17.192 16:31:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:17.192 16:31:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.192 16:31:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:17.192 16:31:10 -- pm/common@44 -- $ pid=93176 00:23:17.192 16:31:10 -- pm/common@50 -- $ kill -TERM 93176 00:23:17.192 16:31:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:17.192 16:31:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:17.192 16:31:10 -- pm/common@44 -- $ pid=93178 00:23:17.192 16:31:10 -- pm/common@50 -- $ kill -TERM 93178 00:23:17.192 + [[ -n 5264 ]] 00:23:17.192 + sudo kill 5264 00:23:17.201 [Pipeline] } 00:23:17.217 [Pipeline] // timeout 00:23:17.224 [Pipeline] } 00:23:17.239 [Pipeline] // stage 00:23:17.244 [Pipeline] } 00:23:17.260 [Pipeline] // catchError 00:23:17.271 [Pipeline] stage 00:23:17.273 [Pipeline] { (Stop VM) 00:23:17.286 [Pipeline] sh 00:23:17.564 + vagrant halt 00:23:20.854 ==> default: Halting domain... 00:23:27.426 [Pipeline] sh 00:23:27.718 + vagrant destroy -f 00:23:31.012 ==> default: Removing domain... 
00:23:31.024 [Pipeline] sh 00:23:31.305 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:31.314 [Pipeline] } 00:23:31.329 [Pipeline] // stage 00:23:31.336 [Pipeline] } 00:23:31.351 [Pipeline] // dir 00:23:31.357 [Pipeline] } 00:23:31.371 [Pipeline] // wrap 00:23:31.377 [Pipeline] } 00:23:31.390 [Pipeline] // catchError 00:23:31.399 [Pipeline] stage 00:23:31.402 [Pipeline] { (Epilogue) 00:23:31.415 [Pipeline] sh 00:23:31.698 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:38.273 [Pipeline] catchError 00:23:38.275 [Pipeline] { 00:23:38.289 [Pipeline] sh 00:23:38.576 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:38.834 Artifacts sizes are good 00:23:38.842 [Pipeline] } 00:23:38.855 [Pipeline] // catchError 00:23:38.866 [Pipeline] archiveArtifacts 00:23:38.873 Archiving artifacts 00:23:38.985 [Pipeline] cleanWs 00:23:38.998 [WS-CLEANUP] Deleting project workspace... 00:23:38.998 [WS-CLEANUP] Deferred wipeout is used... 00:23:39.005 [WS-CLEANUP] done 00:23:39.006 [Pipeline] } 00:23:39.021 [Pipeline] // stage 00:23:39.026 [Pipeline] } 00:23:39.040 [Pipeline] // node 00:23:39.045 [Pipeline] End of Pipeline 00:23:39.077 Finished: SUCCESS